[Editor’s note: This is the second in a series edited by Elizabeth Brown, Aprille Knox, and Courtney Tolmie at Results for Development, focusing on the contexts in which think tanks operate. This post is written by Elizabeth Brown. The series addresses a subject of great importance to think tanks as well as to those supporting them. It provides a substantial contribution to On Think Tanks’ efforts to promote a more nuanced discussion on the subject. If you want to be a guest editor for On Think Tanks please get in touch. This post is based on the study “Linking Think Tank Performance, Decisions, and Context,” undertaken by Results for Development Institute in partnership with the University of Washington and with generous support from the Think Tank Initiative.]
Conversations in the physical and online spaces where think tanks and donors convene suggest that context does the opposite of levelling the playing field: it changes the rules of the game in which think tanks strategise and perform.
If the rules guiding performance and success vary by context, then donors, think tankers and scholars should account for these differences when evaluating think tanks’ performance across diverse settings. For instance, how should donors encourage and measure think tanks’ performance across a portfolio? Does a well-performing think tank influence the policy debate? Or can it do even better: have its policies adopted by government? And for think tank scholars, what methods can be used to account for context’s impact on performance while controlling for the many other factors at work?
Our recent mixed-methods study, Linking Think Tank Performance, Decisions, and Context, suggests that the evidence base supporting donor and think tank policy on context is still at an early stage. We see a lot of opportunity, and some challenges, for scholars interested in answering policy questions like those posed above.
Before describing the opportunities and challenges, it’s worth noting two things. First, the study of context is relatively new and the methods applied so far have been appropriately exploratory. For example, all 23 of the context studies we examined were produced within the last 15 years, and most applied exploratory, small-N methods such as historical and country case analysis.
Second, every study is different and researchers need to appropriately match qualitative and quantitative methods to the research questions. With these points in mind, we now turn to some insights gleaned from our recent work.
As a first insight, the number of context factors documented in exploratory studies is mind-bendingly large. Scholarship has progressed in making sense of these factors, but information about their relative importance in different contexts is still missing. For example, we found over 250 context items when searching all 23 studies for every mention of context. We classified these factors according to whether they were determined by forces mainly or entirely outside of a think tank’s control, or determined by the think tank’s own choices. This approach was also used in “Far Away from Think Tank Land: Policy Research Institutes in Latin America.” Distinguishing exogenous and endogenous factors is an important step towards policy research that eventually identifies the effects of external context while controlling for the choices think tanks make.
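To make the exogenous/endogenous distinction concrete, here is a minimal sketch in Python of how mentions of context factors might be coded and tallied. The factor names and the coding scheme below are invented for illustration; they are not the study’s actual coding.

```python
from collections import Counter

# Hypothetical coding scheme: each factor mentioned in the literature is
# tagged as 'exogenous' (outside the think tank's control) or
# 'endogenous' (determined by the think tank's own choices).
CODING = {
    "authoritarian government": "exogenous",
    "donor funding cycles": "exogenous",
    "strength of civil society": "exogenous",
    "staff recruitment strategy": "endogenous",
    "choice of communication channels": "endogenous",
}

def tally(mentions):
    """Count how many coded mentions fall into each class."""
    return Counter(CODING[m] for m in mentions if m in CODING)

mentions = [
    "authoritarian government",
    "staff recruitment strategy",
    "donor funding cycles",
]
print(tally(mentions))  # Counter({'exogenous': 2, 'endogenous': 1})
```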
The second insight is the importance of measuring context in clearly defined and commensurable ways. The four exogenous factor groups in our study—political and economic, donor environment, civil society, and intellectual climate—were formed by combining similar and related context sub-categories from the literature into broader, relevant categories. For example, we fit the ‘number and strength of political parties’, ‘authoritarian government’ and ‘instability and high turnover of key government positions’ together under the category of political context.
The advantage of this method is that it led us to develop categories that were fairly straightforward to operationalize. For example, we used widely available country-level indicators, such as the World Bank’s World Development Indicators, among other sources, to measure and test the groupings in a sample of roughly 100 think tanks. This testing helped demonstrate the relative importance of political context in comparison to the donor environment, civil society context, and the intellectual climate when looking across think tanks’ experience in 48 countries. However, these blunt instruments are subject to measurement error, and more research attention should be paid to developing appropriate instruments to measure the relative importance of context across multiple settings.
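As an illustration of this kind of operationalization (a sketch only, with invented indicator names and values, not the variables used in the study), country-level indicators can be standardized and averaged into one score per factor group:

```python
import pandas as pd

# Hypothetical country-level indicators, loosely in the spirit of
# World Development Indicators-style data (all values invented).
df = pd.DataFrame(
    {
        "party_competition": [0.2, 0.7, 0.5],
        "govt_effectiveness": [-0.5, 1.1, 0.3],
        "press_freedom": [30.0, 75.0, 55.0],
    },
    index=["Country A", "Country B", "Country C"],
)

# Hypothetical mapping of indicators to factor groups.
GROUPS = {
    "political": ["party_competition", "govt_effectiveness"],
    "intellectual_climate": ["press_freedom"],
}

def group_scores(data, groups):
    """Z-score each indicator, then average within each factor group."""
    z = (data - data.mean()) / data.std(ddof=0)
    return pd.DataFrame({g: z[cols].mean(axis=1) for g, cols in groups.items()})

print(group_scores(df, GROUPS))
```

The design choice here is deliberate: standardizing before averaging keeps indicators measured on very different scales commensurable, which is exactly the property the paragraph above argues for.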
Third, research doesn’t have to be quantitative in order to derive important implications, but it does need to be rigorous. Strong qualitative research is routinely used in the social sciences to sharpen description, develop and test concepts, and contribute to theory-building. For example, after applying a careful and transparent case selection and matching process, our field research in Peru, Zimbabwe, Bangladesh, and Vietnam strongly suggested that two external context factors, political competition and government effectiveness, fundamentally affect a think tank’s ability to influence policy. While government effectiveness affects how much policy influence a think tank can have, political competition affects whether affiliation with a political party or independence from parties is the more effective strategy.
The strength of these empirical observations relied on careful case selection and rigorous field methods. Researchers should therefore pay careful attention to case selection when they undertake comparative case research.
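To show what such case-selection logic can look like in practice (a sketch over invented country profiles; this is not the study’s actual matching procedure), one can search for pairs of countries that are similar on background covariates but differ on the factor of interest, in the spirit of a most-similar-systems design:

```python
import itertools

# Hypothetical country profiles: background covariates plus the
# factor of interest (level of political competition).
countries = {
    "A": {"gdp_pc": 1500, "population_m": 30, "competition": "high"},
    "B": {"gdp_pc": 1600, "population_m": 28, "competition": "low"},
    "C": {"gdp_pc": 9000, "population_m": 90, "competition": "low"},
}

def distance(x, y):
    """Crude dissimilarity on background covariates (smaller = more similar)."""
    return (abs(x["gdp_pc"] - y["gdp_pc"]) / 1000
            + abs(x["population_m"] - y["population_m"]) / 10)

# Keep pairs that differ on the factor of interest, ranked by similarity.
pairs = [
    (a, b, distance(countries[a], countries[b]))
    for a, b in itertools.combinations(countries, 2)
    if countries[a]["competition"] != countries[b]["competition"]
]
for a, b, d in sorted(pairs, key=lambda p: p[2]):
    print(f"{a} vs {b}: distance {d:.2f}")  # A vs B is the best-matched pair
```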
Finally, the absence of a sampling frame for think tanks makes it difficult to test the generalizability of qualitative findings derived from even the most rigorous small-N studies. Surveys using convenience samples will likely suffer from selection bias. For instance, our sample of about 400 think tanks was cobbled together from lists provided by two think tank donors, an NGO partner of developing-country think tanks, and internet searches of think tank forums, conferences, and events. The sample likely included a higher proportion of think tanks with external donor ties, and fewer locally-funded think tanks. As a result, we cannot control for donor selection in our sample, which makes deriving policy implications from the analysis impossible. Researchers should therefore either switch to snowball sampling methods (or other alternatives) or work collaboratively with donors and think tanks to identify the population of think tanks globally, perhaps even by establishing a registry of think tanks or policy research organizations.
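For intuition, snowball sampling can be sketched as a traversal of a referral network in which each surveyed organization names peers to contact next. The network below is invented for illustration; in practice, referrals would come from interviews or directories:

```python
from collections import deque

# Hypothetical referral network: each think tank names peers it knows.
referrals = {
    "TT-1": ["TT-2", "TT-3"],
    "TT-2": ["TT-4"],
    "TT-3": ["TT-2", "TT-5"],
    "TT-4": [],
    "TT-5": ["TT-1"],
}

def snowball(seeds, waves=2):
    """Breadth-first referral chasing for a fixed number of waves."""
    sampled, frontier = set(seeds), deque((s, 0) for s in seeds)
    while frontier:
        node, wave = frontier.popleft()
        if wave >= waves:
            continue
        for peer in referrals.get(node, []):
            if peer not in sampled:
                sampled.add(peer)
                frontier.append((peer, wave + 1))
    return sampled

print(sorted(snowball(["TT-1"])))  # ['TT-1', 'TT-2', 'TT-3', 'TT-4', 'TT-5']
```

Each additional wave extends coverage beyond the initial seed lists, which is precisely the reach a donor-derived convenience sample lacks.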
That think tanks supported by international donors do not operate on a level playing field is now a widely acknowledged point. The context and performance studies examined in our literature review have contributed to this result by calling out the enormous range of context factors at work. Our research built on this work by contributing a framework for thinking about context, ways of operationalizing context factors, and rigorous small-N methods for comparative case analysis.
Looking to the future, think tank scholars have the opportunity to address the ‘so what’ policy questions posed above in order to develop an evidence base to support the policies of donors and think tanks alike. Applying appropriate methods to do this is a critical next step for the field.