How Think Tanks Work: Analyzing Budgets

17 July 2013

We often discuss think tanks in ways similar to how we talk about cities: what took us to them, which services worked particularly well, who we met, and which impressions we took away. Personal experience is indispensable in understanding cities, and think tanks. But it also is incomplete. Our personal and intellectual travels only cut a thin path. To get a broader perspective we need a more systemic view, especially if our understanding is to inform what we do.

A closer look at the numbers offers one such systemic perspective. Aggregate numbers are instructive for at least three reasons. They offer:

  1. Rules-of-thumb: median practices offer approximations of how think tanks can operate.
  2. Diversity: the numbers illustrate how different the think tanks are, on various dimensions. Numbers make upper and lower boundaries visible at a glance.
  3. Total size: taken together, the numbers give us an excellent sense of the scope of think tanks as a group. The aggregates establish context, locally, nationally and internationally.

In what follows, I will analyze 20 prominent US think tanks. The institutions are drawn from a combination of the lists compiled by James McGann at the University of Pennsylvania, and by David Roodman and Julia Clark at the Center for Global Development. For brevity I will use “top 20” to denote their prominence, not any other quality. The proposed approach emphasizes their diversity. The analysis of the cohort complements a previous post in which I looked at the Brookings Institution to show how numbers can illuminate our understanding of a single think tank.

All data draws on tax declarations submitted to the Internal Revenue Service (IRS), so the numbers reflect the situation as of 2011, as in saying “by 2011, the World Resources Institute had net assets of $53m”. An update for 2012 will follow as soon as all IRS data is available. The reporting delay is a drawback, but perhaps the more salient point is that the IRS imposes exemplary transparency requirements on nonprofits. Those impatient to do their own analysis can jump to the end of the post for the link to the spreadsheet, which also contains technical notes.

Behemoths, Midfield & Boutiques

So what are the rules of thumb, how diverse is the cohort, and where do aggregate numbers leave us?

The US top 20 is heterogeneous. By budget size, the group median is $29m. Yet the diversity is striking. With an annual budget of $263m, the RAND Corporation, the largest institution by far, is more than 35 times the size of the smallest institutions in the group, and nine times the median size. Brookings ($90m) is the next biggest institution, more than three times the median size. The Heritage Foundation ($80m) and the Urban Institute ($64m) are the other two behemoths in the cohort.

The majority of institutions are grouped loosely around the median size. Eleven think tanks have budgets between $20m and $40m, with extensive portfolios of programs. Next to these midfield institutions there are a handful of boutique think tanks, often with a focused profile, such as the Peterson Institute for International Economics ($11m), the Center for Global Development (CGD, $10m), or the Atlantic Council ($7m), but also the New America Foundation (NAF, $16m), a think tank that, according to one commentator, “has pioneered the introduction of journalism to the traditional think tank mix of advocacy and research”.

Aggregate numbers suggest that US think tanks are an industry, at least in scope: in 2011, the budget total of the group was nearly $900m. This is more than 10 times the estimated total budget of the 20 most prominent think tanks in the United Kingdom. A rough calculation suggests that it also is a multiple of what Africa spends on all of its policy-related research. (While reliable numbers are scarce, a recent report says that annual social science funding in South Africa amounts to around $200m, all disciplines including economics, all universities and institutions, all public and private funding.) The median, too, helps illustrate that the difference is almost zoological in character: think tanks that are boutique in Washington would be behemoths in London. Median think tank size in the UK is less than $2m, not even a tenth of the US figure.
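As a quick plausibility check, the size ratios behind this grouping can be recomputed from the rounded budget figures quoted above. A minimal sketch; note that the behemoth/midfield/boutique cut-offs below are illustrative choices of mine, not thresholds from the post:

```python
# Annual budgets ($m) as quoted in this section; the group median is $29m.
median_budget = 29
budgets = {
    "RAND": 263, "Brookings": 90, "Heritage": 80, "Urban Institute": 64,
    "Peterson Institute": 11, "CGD": 10, "Atlantic Council": 7,
}

for name, budget in sorted(budgets.items(), key=lambda kv: -kv[1]):
    ratio = budget / median_budget
    # Illustrative cut-offs: above 2x the median = behemoth, below 0.7x = boutique.
    label = "behemoth" if ratio > 2 else ("boutique" if ratio < 0.7 else "midfield")
    print(f"{name}: ${budget}m, {ratio:.1f}x median ({label})")
```

Run as-is, this reproduces the ratios in the text, for instance RAND at roughly nine times the median budget.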

Assets, Gearing & Fundraising

To finance future activity, US think tanks have accumulated large assets. The best figure to look at is net assets: funding that donors have formally committed plus endowment, minus current liabilities (mortgages, bank debts, or pension obligations). The median size of net assets is $67m, about 2.3 times the median annual budget. Keeping net assets at more than twice the annual budget may thus be another rule of thumb for sustaining performance.
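The rule of thumb falls out of a one-line calculation on the rounded medians quoted above (a sketch, using only figures from this post):

```python
# Median figures ($m) quoted above: net assets and annual budget.
median_net_assets = 67
median_budget = 29

cover = median_net_assets / median_budget
print(f"Median net assets cover about {cover:.1f}x the median annual budget")
```

The resulting ratio of roughly 2.3 is where the "more than twice the budget" benchmark comes from.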

After Brookings ($299m), the Carnegie Endowment for International Peace (CEIP) has the largest net assets ($253m). Financially, the Carnegie Endowment is run conservatively, as its annual budget amounts to only 11% of its net assets. In its caution, the Carnegie Endowment is closer to a foundation than any of the other think tanks in the group. Cautious, too, in their financial gearing are the Peterson Institute (17%), the American Enterprise Institute (AEI, 19%), and the Woodrow Wilson Center (19%). On the other end of the scale are the RAND Corporation (146%) and the Center for New American Security (CNAS, 292%). Both are unusual in that their budgets are bigger than their net assets. At RAND, this reflects large liabilities (about $130m for the building of its Santa Monica headquarters, see here) and perhaps more transactional project-focused funding, as opposed to the program support that many DC-based think tanks enjoy. As for CNAS, its net assets of $2m are the smallest in the entire cohort, even though it has a $5m annual budget. It will be interesting to see how CNAS develops over the next few years.

In total, US think tanks have net assets of more than $2 billion. Although UK think tank funding is not particularly transparent, Chatham House appears to be the UK think tank with the largest net assets, at $14m. This is less than a 20th of Brookings’ net assets, and would only cover Chatham House’s operations for a bit more than a year. Even allowing for accounting differences (how to factor in owned property), establishment UK think tanks look woefully underfed by comparison with their US counterparts.

Yet US think tanks also are (and have to be) hunters and gatherers of funds, even if they have a solid reserve. In broadly rounded figures, US think tanks invest a bit under 5% of their budgets in fundraising, about 20% in administration, and 75% in programming. The percentages are a reminder that US think tanks are explicit about investing money to generate future funding, in ways that are not necessarily followed, or even entirely understood, among institutions in less established contexts.
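Applied to a think tank of median size, the rounded percentages translate into dollar figures as follows (a sketch; the 5/20/75 split is the approximation given above, not any institution's actual accounts):

```python
# A median-sized budget split by the rounded percentages quoted above.
budget = 29_000_000  # median annual budget in USD
split = {"programming": 0.75, "administration": 0.20, "fundraising": 0.05}

for category, share in split.items():
    print(f"{category}: ${share * budget / 1e6:.2f}m")
```

For the median institution, fundraising at this rate is still an investment of well over $1m a year.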

Although more than half the institutions invest between 3% and 7% of their budgets in fundraising, there are several outliers, even allowing for differences in reporting styles. The Heritage Foundation says that 36% of its budget goes to fundraising, and only 3% to administration. Heritage probably counts much of its leadership activity as an effort to mobilize resources, perhaps in line with its emphasis on engaging its large membership base. On the other side we find institutions that report fundraising percentages of 1%, such as RAND or the Urban Institute, illustrating that they have more of a contract model; the Pew Research Center (1%), with its own endowment; as well as institutions that have more stable sources of income, such as the National Bureau of Economic Research (NBER, 0%).

Staffing: Companies, Platoons & a Full Opera House

The median US think tank has 157 employees. In the military terms that “tank” implies, this is the size of a large infantry company. Yet RAND employs more than 2000 staff, almost a small brigade. Heritage (530), Brookings (530) and the Urban Institute (404) are roughly the size of a battalion. The Atlantic Council (46), the International Crisis Group (ICG, 44) and the Center for New American Security (37) could field a platoon. Useful as the military analogy is for illustration, it was perhaps a missed opportunity at inception not to call policy research institutions “think orchestras”.

IRS declarations also give us a view into staff composition: as a median, roughly 20% of staff are senior, commanding salaries of more than $100,000 per year. This level of compensation is roughly equivalent to the reported median salary for full professors. The rule of thumb is fairly solid: 16 of 20 institutions have between 13% and 27% of staff with salaries above $100k. The outliers show different staffing set-ups: ICG reports that 48% of its staff receive more than $100k, probably reflecting that a significant share of its staff works outside the US. US-hired staff likely receive benefits for being posted abroad. Moreover, ICG probably does not declare staff hired in foreign offices to the IRS, as they are not US-taxed employees. This explanation tallies with ICG’s high budget per staff ($405k).

On the other end of the staffing spectrum we find institutions that seem to pay for extensive part-time work, or mobilize many junior staff. NBER lists 695 employees, 2% of them paid above $100k, and spends on average about $52k per staff member, suggesting that many are part-time. Similarly, the Center for American Progress (CAP) reports that it has 325 employees, of whom 2% earn more than $100k. CAP may thus be a model for think tanks that want to understand how to mobilize a broader and more junior staff base.
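The 13-27% band makes spotting outliers almost mechanical. A sketch of the check, using only the shares quoted above for the three named outliers (the band itself comes from this post; a full run would of course use all 20 institutions):

```python
# Share of staff paid above $100k, for the three outliers named above.
senior_share = {"ICG": 0.48, "NBER": 0.02, "CAP": 0.02}

# 16 of 20 institutions fall inside the 13-27% band; flag everything outside it.
outliers = {name: share for name, share in senior_share.items()
            if not 0.13 <= share <= 0.27}
print(outliers)  # all three named institutions fall outside the band
```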

In aggregate, the top US think tanks employ more than 6500 staff, almost the size of a small division. More than 1200 staff have a salary of $100k or more. RAND employs almost a third of all think tank staff and more than 45% of those with salaries higher than $100k. Yet even without RAND staff the employees of the top 20 US think tanks could fill some of the world’s larger opera houses.

Top 20 Analysis – Next Step

So much for the overview: some emerging rules of thumb, the diversity within the cohort, and the aggregate numbers. Established think tanks could use these benchmarks to understand their own position within the field. Growing think tanks can take orientation from the numbers, and tailor successful models to their own needs. It is an attempt to make think tanks legible in the way we make cities legible, by mapping them in space and through their indicators.

Technically, I hope to have demonstrated that 20 institutions is a good unit of analysis. A top 20 analysis captures a range of practice while also offering a sensible limit, since compiling information for many more think tanks becomes impractical. As illustrated, the cohort has significant diversity, allowing for good comparison. The same principle could be applied to think tanks from other countries, or even regions. It would help if the entire field became more transparent about its finances.

Yearly tracking of 20 institutions can show us how the field develops. This information will be easy to capture, with an open and replicable methodology. A top 20 analysis can thus complement other efforts, such as James McGann’s ranking, or the innovative approach of measuring think tank reach by David Roodman and Julia Clark. While it is tempting to play efforts against each other, their complementarity is more promising. (In taking a look at think tanks from Australia recently, I would not have known where to start without McGann’s list. The reach-per-dollar calculation by David and Julia inspired me to look more closely at budgets.) As for annual comparisons, 2012 numbers will be out soon; only three IRS forms out of 20 are not yet published.

The Excel spreadsheet is available for a closer look. Among additional organizations covered but not specifically mentioned are the Center on Budget and Policy Priorities (CBPP), Center for Strategic and International Studies (CSIS), and the German Marshall Fund of the US (GMFUS).

We are requesting registration first because we are (selfishly) interested in data on who the happy few are who find these posts useful. Secondly, while we have double-checked the data, there are many moving parts and this is a first shot at capturing aggregate numbers; we want to be able to reach readers in case we have corrections. As always, comments and questions are welcome. What did you find useful? How could we improve the top 20 analysis? How else could one use these numbers?

If you are interested in checking out the spreadsheet (which we try to make as usable as possible: it has medians, averages, maximum and minimum values, and is sortable), click here.

[2018 update: the link above takes you to old and new data, and there is a newer post on this, here. The older data has more categories, so it may be interesting for the details of operations; these typically also do not change much over time. If you found this useful, we are grateful for a quick hello via Twitter.]