[Editor’s note: This article presents Jaime González-Capitel’s Master’s Thesis Who Used Our Findings? Framing Collaborations between American Foundations and Think Tanks Through Practices in the Evaluation of Policy Influence. Together with Professor Francesc Ponsa and a team of raters, Jaime has conducted a comprehensive evaluation of Spanish think tanks applying Transparify’s framework.]
A short personal genesis of the project
Right before I embarked on my graduate adventure of studying think tanks at Georgetown University in the summer of 2014, Transparify published its first donor transparency rating. Although my sources included On Think Tanks and this 2012 post by Andy Williamson, the coincidence was striking: my proposal to the Fulbright Commission in Spain had been phrased as an attempt to determine the kind of parameters and indicators that could be applied to policy research organizations with the aim of evaluating their transparency and accountability. Given my background in political philosophy, I was interested in posing a broad problem that could open up a number of different research directions, and the question of transparency could do just that. Interestingly, by narrowing the focus to donor transparency, Transparify’s framework allowed me to ask further questions in a more specific direction:
- How is donor transparency useful for think tank researchers?
- How do donors frame returns on investments?
Framing the research question
Slowly, these questions led to my interest in the relationships between think tanks and foundations in America. Unlike governments, corporations or individuals, foundations are in the business of giving money, and they have an increasing commitment to grantee evaluation. However, the influence of think tanks is very hard to gauge, which turns the calculation of returns on investment into a contentious issue. In this context, the research question for my Master’s thesis started as:
- RQ1: Do foundations evaluate the work of think tanks?
As my literature review unearthed the 2006 evaluation of the Center for Global Development, the question turned from merely identifying whether such studies were happening to understanding the details, motivations and agendas behind the practices, resulting in the final research question:
- RQ2: How is public policy research evaluated at the intersection of foundations and think tanks?
The main hypothesis is an affirmative answer to the original research question.
- H1: Think tanks are more likely to evaluate their work when they receive foundation funding
Two additional hypotheses were added when the literature review identified existing practices among development-oriented think tanks and advocacy-oriented organizations, but little in the hardcore realm of foreign policy or multi-issue centers.
- H2: Think tanks with an orientation to advocacy are more likely to evaluate their work
- H3: Think tanks invested in international development are more likely to evaluate their work than think tanks invested in domestic policy
In other words, the “mission” of the thesis was to identify existing practices in the evaluation of think tanks and to test whether they are driven by the funding pressure exerted by foundations.
Research methods
The study employed two empirical methods to test these practices: a survey of grey literature and a set of interviews with experts.
Method 1: Surveying the grey literature
I reviewed two types of grey literature: the quantitative indicators and qualitative arguments used as evidence of influence in the annual reports of a dozen think tanks, and the evaluation policies of selected foundations, with special attention to whether they evaluate research and at which level (project/program/organization) they evaluate their grantees.
The results were supplemented with two case studies, which acknowledge the diversity found in the field and stand in strong contrast with the general patterns identified in the preceding analyses: the already mentioned evaluation of the CGD, and the W.T. Grant Foundation’s portfolio on research utilization.
Method 2: Expert Interviews
For the qualitative analysis of semi-structured interviews, experts were identified through a mixture of direct search, existing contacts and snowballing, and corresponded to three profiles: foundation staffers, think tank staffers, and independent evaluation experts. The foundations selected had an interest in effective grantmaking, grantee evaluation and/or organizational learning; a proven record in funding research and independent policy research organizations; and/or advocacy and policy change. The think tanks selected received at least 20% of their funding from foundations.
Interview data was then analyzed along three major dimensions:
- Context, which identifies examples and situations with some detail;
- Respondents’ profiles, and
- Broader themes.
The attention to organizational context made it possible to extract the case studies that provided the evidence with which to test the three hypotheses, while themes and profiles offered valuable insights into the interconnections between the different issues, motivations and innovations at stake. The analysis also set out to identify best practices and recommendations for the two fields of philanthropy and policy research organizations.
Respondents and bias
Half of the 22 interviews conducted between February 1st and March 31st, 2016 were with foundation staffers, with only 7 think tank members and 4 independent evaluation experts. While for foundation officers the conversation about grantee evaluation is part of their usual business, for think tanks it seems to be difficult terrain to walk into. The response bias overlaps with an ideological one: none of the organizations represented can be labeled conservative, and a good portion of them, especially on the philanthropic side of the equation, are clearly among the liberal ranks.
The reason is well known: in the United States, while conservative foundations tend to fund organizations they like, liberal foundations have a tendency to fund only efforts that can prove their effectiveness and show results. And that’s precisely the murky area where think tanks are likely to feel uneasy.
In spite of these biases, the interview analysis yielded substantial results in three areas: case studies, inter-organizational relationships, and best practices and recommendations.
Case studies: Funders play a role in think tanks’ evaluating efforts
A case study was extracted from each of the interviews with think tank staffers. The six resulting case studies include prominent DC-based think tanks like Brookings and the Urban Institute, as well as others whose identity cannot be disclosed for confidentiality reasons. Richard Bush III, a Senior Fellow at Brookings, expressed what may be considered a representative voice of concern about measuring influence:
What can be measured quantitatively may not indicate high quality impact […]. If somebody writes fifty op-ed articles that are placed in various journals around the country, and in some way enter the competition of ideas, how can that compare with a scholar spending fifteen minutes with the President of the United States? I think the question answers itself.
Four of the case studies directly support the main hypothesis that think tanks are more likely to evaluate their work when backed by foundations, with each organization at a quite different stage of development: from a dedicated grant for the design of a logic model that will be adopted as a fundraising tool, to an extraordinary effort to build internal capacities and reporting tools without formal board support, to an internal evaluation unit that works both organically and on demand for funders of third-party projects.
Finally, I also interviewed Peter Taylor from IDRC and Sarah Lucas from the Hewlett Foundation, who explained the approach of the Think Tank Initiative towards monitoring and evaluation (M&E). In all these cases, think tanks have developed their M&E capacities to respond to the accountability demands of donors, which in most cases are foundations.
In what I see as qualified support for the thesis’ core hypothesis, in 2014 the Urban Institute launched a new Policy Advisory Group, which has grown to 25 members and (according to the Institute’s own website) is devoted to “better understanding decisionmakers’ needs and challenges”, as well as “systematically translat[ing] what we know into strategies for actionable work on the ground”. While there is no explicit discourse about evaluation, the renewed emphasis on results and impact is still largely driven by foundations, which, in stark contrast with the Institute’s tradition as a government contractor, are the group’s major funders.
Best practices and recommendations
Julia Coffman’s approach to message tracking stands out among the best practices in innovative evaluation: however hard it may seem to measure influence, it is always possible to survey key audiences and gauge the reception of certain messages as they are reflected in attitudes, language and knowledge. Bellwether methodologies are probably the best example of this approach, which has so far been applied to advocacy efforts. Its expansion may require think tanks to recognize their practice of advocacy, at least with a lower case “a”, a distinction that I take from a great book by Bogenschneider and Corbett. But I think that, more generally, these methodologies can serve the more traditional mission of informing, educating and articulating policy problems in new ways.
In general, it is important to note that the interest in evaluating advocacy efforts does not relate directly to the legislative success of programs and campaigns, which depends on a plethora of factors beyond the reach of any single actor. Most foundations recognize that complexity. For the most advanced organizations, the pressure for results is more about understanding how they and their partners may have contributed to shifting views, positions and perspectives around certain issues. Properly conceived, the connection between evaluation and policy research (or, more generally, the policy-related intelligence products think tanks stand out for) may help policy research institutions understand how successful they have been in engaging significant actors and, consequently, in moving the needle on priority issues. Coffman warns, however:
I don’t think that think tanks are very good at [measuring how a message resonates with its audience…] they are better at thinking about audiences first hand and getting the questions right, they have a good design at the front end, but then it kind of stops.
Further research
Given the modest scope of the study, with only six think tanks surveyed, there is ample room for further research that seeks to test the hypotheses. For example, an interesting direction would be to frame discussions around ROI for conservative institutions, including the very few that do have evaluation frameworks in place. Also, the secondary hypotheses (on whether think tanks specialized in development work and advocacy are more likely to engage in impact evaluation) could not be supported because of the paucity of data I was able to gather.
However, there may be more interesting directions. An important question that I was not able to address directly is what the motivation should be for think tanks to engage in evaluation: will it remain an extrinsic problem reduced to the adoption of funders’ language that may streamline fundraising? Or does it have additional internal advantages for strategic management and reputation, for example as part of an accountability framework that can enhance the position of think tanks in the discredited democratic processes of our time?
At a more technical level, important questions remain as to whether a more direct connection can be drawn between the activity and output indicators used by think tanks in their annual reports and the kind of tough questions that an evaluator is likely to ask. In other words: can recognizing the difficult fit between evaluating influence and doing sound policy research advance the state of the art in what the accepted practices should be? This line of research is quite close to the question of whether the tools deployed in development work have an application in the domestic arena, tying back to the hypothesis testing described above.