A think tank ‘knowledge society’ index – is it possible?

24 January 2014

[Editor’s note: this post was written by David Walker, Research Officer at the Overseas Development Institute. David worked on the Think Tank Initiative’s evaluation, and this post is an opportunity to reflect on a challenge he tried to address as part of it.]

It seems indexes are de rigueur nowadays. A new one pops up, expands its remit or refigures its usability every other day. And why not? If you want something to back up a line of enquiry, and you need something officious-looking that will quickly befuddle any persistent criticism, a custom-made index will do nicely. Of course, they have less cynical uses: we know they can cut through noise, be constructed and weighted around some very niche themes, and be used to great effect as part of ‘stick and carrot’ advocacy campaigns. The downside, of course, is that you have to make the index in the first place, traversing and confronting all manner of conceptual and methodological quandaries along the way. Moreover, if you ever do send your index out to be applied in the real world, you may be responsible for all sorts of misconceptions and re-appropriations.

I recently had the challenge of developing what came to be known as a ‘Knowledge Society Index’ (KSI) for ODI’s Research and Policy in Development Programme (RAPID) and the European Centre for Development Policy Management (ECDPM). The idea was to produce a numerical representation of the degree to which different national contexts provide enabling environments for evidence-based policymaking. More specifically, the index was designed to feed into an interim evaluation of the Think Tank Initiative (TTI) – a multi-donor programme dedicated to strengthening the capacity of independent policy research organisations in the developing world.

The evaluation was primarily a process evaluation, focusing on whether the initiative’s theory of change was sound and whether it was delivering its intermediate outcome of strengthening the capacity of the think tanks. But the evaluation team was also asked to assess whether there were any indications that this was affecting the think tanks’ ability to engage with and shape policy. To do this, the evaluation needed to assess the relative evidence-based policymaking climate within which the think tanks operate. This amounts to a context assessment of the knowledge supply and demand structures in a given country, with the ultimate aim of determining the degree to which that environment supports or undermines the activities of think tanks. This information was essential for locating the observed performance of a given think tank in relation to whether it was working in a supportive or obstructive context.

In terms of developing a framework of analysis, our starting principle was that a given think tank will be more able to function effectively and efficiently in an environment that demonstrates both an appetite for, and application of, various types of knowledge (academic, ‘grey’, participatory/citizen-based, etc.) across all societal levels and actors. In other words, an optimally functioning environment for a think tank is one in which structures and agents collectively create numerous ‘vertical’ and ‘horizontal’ pathways for knowledge to inform policy.

The first step in developing the index was to figure out what was already out there. An overview of the range of knowledge- and governance-based indices available showed that these indices predominantly have one of three types of thematic focus. The first examines societal spheres, offering cross-country comparisons of governance, economic or social structures, formal and informal institutions and agents, public and private sectors, etc. (e.g. the Change Readiness Index, the World Bank’s Worldwide Governance Indicators, the Bertelsmann Stiftung Transformation Index). The second broad body of work compares contexts by exploring the stage or type of knowledge practice, such as knowledge procurement processes, creation, diffusion and usage, or access to innovative technologies (e.g. the Knowledge Management Index, the Innovation Capacity Index, the Technology Achievement Index (TAI)).

A third array of literature compares variables in relation to political regime or developmental stage, such as relationships between democracies and autocracies, between high-, middle- and low-income countries, or by complexity of economy (e.g. Sharma et al. (2010), the WEF Global Competitiveness Index (GCI)). Arguably, there is a smaller fourth body of literature that blends these dimensions, as demonstrated by the Polity IV framework, which has separate sub-components for types of political regime.

To cut through all of this we looked for lowest common denominators and came up with the classic ‘three intersecting circles’ image: the idea that a think tank-focused knowledge index is best constructed by drawing on broad ‘governance’, ‘social’ and ‘economic’ categories. However, we also realised that this was not the full picture – what was missing was something broader that connects these dimensions.

Enter the concept of the ‘Knowledge Society’. This is admittedly an ambiguous term – but the best usually are. In the literature, it differs from the ‘knowledge economy’ in that a knowledge society lens incorporates the redistributive/normative functions of the state in providing public access to, and control of, information, whereas the knowledge economy is primarily a lens for understanding profit-driven dimensions and economic imperatives. In other words, the functions of maintaining political stability, ensuring good governance and creating a ‘flat society’ are key drivers in the evolution of knowledge societies, whereas in the knowledge economy literature, knowledge is largely a dimension that promotes competitive advantage. ‘Knowledge society’ aspects therefore move away from concerns such as research and development financing and capacities to innovate, and toward wider considerations of equity and access to knowledge and education, such as the development of public ICTs, internet usage per capita, newspaper circulation and investment in public libraries. This is why we named the index the ‘Knowledge Society Index’ rather than the ‘Knowledge Economy Index’ – this, and the fact that a Knowledge Economy Index already exists (but was too simplistic for our needs).

To cut a long story short, this framework led us to a series of hypotheses, which are presented in the diagram below.

[Figure: the Knowledge Society Index framework and its hypotheses]

We ended up selecting 77 sub-indicators from an existing database of 18 indexes to feed into all of these areas (and this was the simplified version!). These ranged from the predictable and generic (‘rule of law’, ‘control of corruption’, ‘doing business index’) to the more specific (‘researchers in R&D’, ‘number of patents per capita’, ‘technological readiness’, ‘newspaper circulation rates’, etc.). Below is a quick taste of what the final product looks like for six of the nine focus countries (Bangladesh, India, Paraguay, Nepal, El Salvador, Ethiopia, Uganda, Nigeria, Senegal). For perspective, the charts are grouped by region (East Africa, West Africa and Latin America), but each also includes some ‘wild cards’ from within the region, as well as from high-income countries. This serves to provide a sense of scale and to indicate some of the variances observed in the findings. Note that the purpose here is not to be normative – i.e. putting the USA (or another high-income country) up as the ‘target’ per se – but rather to determine strengths, weaknesses and entry points (more on that below).
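Mechanically, building such an index boils down to normalising each sub-indicator so that different units become comparable, then averaging them up into dimension scores and an overall score. Here is a minimal sketch in Python of that kind of pipeline; the column names, the min-max normalisation and the equal weighting are all illustrative assumptions of mine, not a reproduction of the actual KSI methodology.

```python
import pandas as pd

# Hypothetical mapping of raw sub-indicator columns to the four framework
# dimensions. Each row of the input DataFrame is a country; each column is a
# raw sub-indicator value drawn from an existing index.
DIMENSIONS = {
    "governance": ["rule_of_law", "control_of_corruption"],
    "economic":   ["doing_business"],
    "knowledge":  ["researchers_in_rd", "patents_per_capita"],
    "social":     ["newspaper_circulation"],
}

def min_max(series: pd.Series) -> pd.Series:
    """Rescale a sub-indicator to 0-1 so different units are comparable."""
    return (series - series.min()) / (series.max() - series.min())

def build_index(raw: pd.DataFrame) -> pd.DataFrame:
    """Average normalised sub-indicators into dimension scores, then average
    the dimensions into an overall score (equal weights assumed purely for
    illustration)."""
    scores = pd.DataFrame(index=raw.index)
    for dim, cols in DIMENSIONS.items():
        scores[dim] = raw[cols].apply(min_max).mean(axis=1)
    scores["overall"] = scores[list(DIMENSIONS)].mean(axis=1)
    return scores
```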

[Figure: East Africa index diamonds]

[Figure: West Africa index diamonds]

[Figure: South America index diamonds]

This is all very interesting, and I’m sure we could talk at length about what we appear to be seeing. Here are just a couple of points for starters. The first step is to look at the surface area of each country’s diamond: this is the overall index score and a summary of the relative enabling environment for think tanks. Second, the shape of the diamond is a giveaway for whether the country is an ‘all-rounder’ or not. In theory, this indicates whether any capacity development initiatives to promote evidence-based policymaking ought to be broad, or whether they should focus on strengths (see Venezuela’s ‘social’ side) or weaknesses (see Guinea’s ‘governance’ side). Another useful aspect of the diagrams is that surprises and regularities jump out: why does the DRC perform that much better than Uganda on the ‘knowledge’ side of things, while having almost half the overall score? And what are the basis and implications of the better social scores seen in Latin American countries compared to West and East African countries?
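As an aside, the ‘surface area’ reading of the diamonds has a neat geometric property worth making explicit: for the same total score, a balanced profile encloses more area than a lopsided one, so the area metric itself rewards all-rounders. A small illustration, assuming four equally spaced radar axes (a simplification of the actual charts):

```python
import math

def diamond_area(scores: list[float]) -> float:
    """Area of the 'diamond' traced by dimension scores on equally spaced
    radar axes: each adjacent pair of axes encloses a triangle of area
    0.5 * r_i * r_j * sin(angle between axes)."""
    n = len(scores)
    angle = 2 * math.pi / n  # 90 degrees when there are four axes
    return sum(
        0.5 * scores[i] * scores[(i + 1) % n] * math.sin(angle)
        for i in range(n)
    )

# A balanced profile vs. a lopsided one with the same total score of 2.0:
print(diamond_area([0.5, 0.5, 0.5, 0.5]))  # 0.5
print(diamond_area([0.9, 0.1, 0.9, 0.1]))  # 0.18
```

The same shoelace-style calculation generalises to any number of axes, which would matter if the framework’s dimensions were ever split further.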

One of the main challenges emerged, as always, in applying this to the ‘real world’. We knew that the index would only provide a very rough and ready indication of the complex and dynamic contexts within which think tanks work – contexts which can be very different for think tanks working in different sectors in the same country, and different even in the same sector at different times. So we hired local experts who really understood the think tank scene in each of the TTI focus countries to assess whether the index results were valid and to provide additional detail specific to the think tanks that the evaluation was looking at.

This is where things started to look less promising.

Firstly, it was hard enough for us, the designers, to keep on top of the assumptions we were making as we went along, as well as the methodological leaps we had made – and the local experts found the results extremely difficult to interpret. As with all indices, there is a lack of congruence between the aggregate data we are trying to confirm and the messy, fuzzy picture in reality.

Secondly, as was pointed out halfway through the process by our colleague and advisor David Booth – who has spent a career studying the multiple political economy dimensions of many countries, most recently in the Africa Power and Politics Programme – many of the indices used to develop the KSI are based on a mix of normative and empirical perspectives. There is, therefore, a built-in bias toward democratic process and a smothering of potential enabling factors. Think tanks can, for instance, still be highly effective in constrained authoritarian environments – look at the Vietnam Academy of Social Sciences, and this blog has reported several other examples from Chile during the Pinochet regime and from China. The counter-argument would be that these latter environments could be considered ‘pressure cookers’ in which there is limited space for new, innovative or dissenting think tanks.

Finally, as is the case with most indices, one can shine the torch on the wall – but not move it about. In other words, we are heavily constrained by the availability of data. Many interesting sub-indices had to be dropped because only a small selection of countries globally have signed up to capturing and releasing them. In addition, as Enrique points out in this article, there are all sorts of critical dimensions that are, at best, not well captured in datasets. These may include the availability and vitality of domestic funding for think tanks, or a myriad of individually minor – but collectively major – obstacles, such as exchange rate fluctuations, tax legislation, no-go topics, and think tank procurement policy and capacity. Based on this, we also retrospectively realised that the conceptual framework does not take account of the potentially positive and negative feedback loops which each of these dimensions might bring to bear on an enabling environment for think tank functions.
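To make the data-availability constraint concrete: before any aggregation, every sub-indicator has to be screened for country coverage, and anything reported by too few countries gets dropped. A hypothetical sketch of that screening step (the 80% threshold and the table layout are my assumptions, not the evaluation’s):

```python
import pandas as pd

def usable_indicators(raw: pd.DataFrame, min_coverage: float = 0.8) -> list[str]:
    """Keep only sub-indicators reported for at least min_coverage of the
    countries in 'raw' (a country-by-indicator table with NaN where a
    country publishes no data)."""
    coverage = raw.notna().mean()  # share of countries with data, per column
    return coverage[coverage >= min_coverage].index.tolist()
```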

Nevertheless, the combination of the work that went into assembling the index for each country, the work of making sense of the underlying indexes themselves, and in particular the additional work done by the local experts did provide the evaluation team with a very rich understanding of the contexts within which the think tanks are working. While this generally helped to explain the different ways in which think tanks have evolved in different countries, it did not really help to explain any differences in the impact of the initiative itself on different think tanks – partly, at least, because we are only four years into a ten-year programme.

Ultimately, the Knowledge Society Index is an interesting way to take a micro-perspective on the different enabling factors for evidence-based policymaking – it rests on a lot of assumptions, but at least it is empirical. When you get to the coalface of reality, though, there might be a contradiction between what the score is saying and the messages you pick up from key informants. This leaves us with an awkward catch-22: in that scenario you might say that the index is only partial, and that you need all the extra data and intangibles to complete the picture. Fine – but what happens when the index does match the reality on the ground? Do we then say that the index confirms reality, and that all that extra fuzzy stuff doesn’t count in this case? How – if at all – do we get around this?

Is it therefore even useful to have a ‘Knowledge Society Index’ – or similar?