The promise and perils of AI in shaping tomorrow’s think tanks, policymakers and foundations

5 February 2024

The advent of artificial intelligence (AI) has fundamentally transformed various sectors, including healthcare, finance, entertainment, and more recently, the realm of think tanks, evidence-informed policy-making, and philanthropy. As we tread further into the era of AI, a mixed tapestry of potential, opportunities, and inherent challenges unfolds, especially for organisations striving to shape global narratives and make informed decisions.

Inspired by a call from Geoff Mulgan (at the European Think Tank Conference) to embrace imagination when thinking about the future, I decided to explore how I think the world of evidence-informed policy could change if we embraced AI.

I tried to imagine how different actors in the field would do things differently. My first draft is a series of not very coherent images of the future. I used this first draft to develop a more coherent series of cases. To prove my point, I used ChatGPT for this.

My train of thought

Individuals will have AI assistants and avatars. An assistant will be like Alexa or Siri. It might be them, in fact. But AI assistants may be bespoke to a person or organisation. AI avatars are the expression of an individual beyond their physical selves.

On the future of funding

AI will be able to develop theories of change to inform the development of strategies and portfolios; map all actors and their positions in the ecosystem; identify current trends and future scenarios; identify the best-placed actors to lead an intervention or portfolio; and decide on the best type and size of support they need.

AI will be able to connect one funder’s strategy with other funders’ strategies – maybe through funders’ AI assistants. It will find complementarities and gaps!

Funders will be able to consult the AI avatars of well-informed individuals in the sectors or places where they want to intervene.

AI could assess the quality of the work of researchers and think tanks by analysing their published content in real time. It could then advise on how to complement research grants with capacity-building efforts and find the best providers or learning opportunities for a funder’s grantees.

AI will monitor changing costs or emerging risks and shocks and respond accordingly to support its grantees. Maybe, the funder’s AI assistant will be in constant conversation with grantees’ AI assistants.

Funders will be able to explore counterfactuals to their strategies by creating parallel scenarios and constantly comparing them to their own.

Funders will find it easier to focus on relations with partners as information needs will be handled by AI assistants. Personal or institutional.

On the future of research

AI will play a role at each step of a research project or programme.

AI will become a personal companion for researchers. They will be able to consult the literature in real time – and in multiple languages – helping to find gaps and opportunities.

An AI assistant will be able to design research projects, including suggesting the best possible team arrangement, team members’ profiles, and budgets. It will act as a project manager, keeping track of tasks. It will be the research assistant conducting reviews of existing knowledge, organising activities, scheduling calls, etc.

In fact, the AI assistant may be able to communicate with funders’ AI assistants to find the necessary funding!

In many cases, quantitative and qualitative work will be conducted by AI. AI will be able to create synthetic samples of real people and survey them (responding exactly as they would have responded – or more truthfully). If data on them already exist, the assistant will use it.

AI will produce a synthesis of the data according to different parameters or lines of inquiry suggested by the researchers. It may even be able to compare the differences between findings using different analytical frameworks – just for the sake of it.

AI could keep up with other AI-aided research projects to alert researchers to any early findings that may affect their own results or analysis. Or even aggregate data!

On the one hand, research teams will no longer need some roles – e.g. assistant – but junior researchers could benefit from senior AI avatars as mentors!

For think tanks, research will need to pay more and more attention to human-to-human interaction. A large part of the content will be shared in a) smaller bites (data, analysis) and b) deeper bites (theory, reflection, concepts). But bites always.

On the other hand, academics will focus more and more on academic writing. It will not matter if their papers are ever read by humans; they will be written for robots. And it will be very easy to get citations (from robots) to assess impact.

Teaching will be increasingly unnecessary. AI assistants or AI avatars can also teach and tailor the education experience to an individual. The lecture will become a seminar. Top universities will be able to afford to provide human supervisors. But it will be possible to get Bill Gates or Einstein to be your supervisor. An AI supervisor.

To stand out, researchers will either need to “fool the robot” (they will need to be unexpected, unique, break with trends or consensus) or become preferred sources for AI (“Who is trending?” will be possible to ask in research, and the AI assistant will tell you).

Experts will have an AI avatar. It would be them but with all the information they cannot remember.

Their AI avatar will know everything they have published or said or discussed. (Thought? Maybe in the future; Seen? Heard? This is doable today). It will also know what their trusted friends and colleagues and peers are saying. And what those they disagree with are saying. It will also know stuff they do not know about their own areas/fields of expertise.

So researchers will always be able to check new ideas with themselves.

Experts (themselves and their avatars) will be experts, finally. They will be able to recall everything. A paper written 30 years ago? No problem, their AI avatar will “know” it. “John, you say white now but 30 years ago you argued it was grey. The data suggest it is still white; or do you have a different argument?”

AI avatars can also be shared. They can connect to other avatars – e.g. from a think tank team or the economics department at Harvard. Think tanks, universities, and consultancies may limit who their staff connect to. It may be a condition of the job.

And AI avatars could also be retailed as AI mentors or consultants for individuals or organisations. Can’t pay for our senior consultant’s time? No worries, get her avatar at a discount!

Universities may limit this to their students, clients, and patrons. Consultancies may buy well-known avatars from universities or think tanks to offer them to their own clients – at a very good price for the university!

Think tanks’ worth will be in both their people and their robots. Is it all about the people? It is all about people and their robots!

Early adoption will be uneven. Larger think tanks and universities will develop these before everyone else – and large consultancies before them.

Small consultants and think tanks will need to rely on white-label AI applications. They will likely catch up in functionality but will lag in reliability, speed, insight, and connectivity. Also, AI will work better where the world is already better connected. It will not know enough about Africa or Asia and other places with less public information.

Oral traditions will also struggle at first, as most AI tools train with written content.

Branding will be all about content. Everyone will choose how to consume content and use their own “brand” or style guide. Tools like Canva or WordPress will allow users to produce their own personalised newsletters, websites, or AI assistants.

On the future of policymaking

AI will replace the need to call a think tanker or a researcher when in need of advice – certainly, it will replace search!

An AI policymaking assistant will accompany policymakers everywhere. It will always be listening to pick up questions, challenges, opportunities, and trends and offer proactive and useful information.

It will allow politicians and policymakers to always know what the price of milk is or what the effect of a 0.5% increase in the interest rate will be on GDP estimates.

AI assistants will answer the first question. Policymakers may then go to the source – or not. An AI expert (whoever you want) will be available.

Policymakers will no longer only seek information and advice from think tanks and experts.

They will prefer to use them for connections (and vice versa?) or for the reputation of being associated with them.

Large governments may control the information that AI assistants can offer their clients/users. The US or Chinese governments may create their own AI assistants. Smaller governments will rely on AI assistants from large consulting firms – and their algorithms.

Ideology will inform algorithms. It will either create, reinforce, or fight biases. But it will never be unbiased.

Better-connected AI assistants/avatars will be able to navigate biases better. Assistants (avatars) will know the values of their user (or themselves). They will consider evidence from that POV. But people will be able to ask for “a different point of view”.

Policymakers will not need focus groups that often. Rarely. They will get immediate estimates of public perception about most things.

Politicians will be able to survey their constituents seconds before a vote. “Mmm will they support me in the next election if I vote against this?”

Hard evidence will matter more? Less?

Will voters matter? Can AI decide who should win based on what it knows?

Personal AI assistants or avatars, like people, can learn and differentiate themselves. When hiring, organisations will interview them too. What would her avatar do about this situation? When awarding a contract, a client may want to know what prospective suppliers’ AI assistants know and would advise on typical issues.

Snapshots of the future

These ideas inspired a series of short snapshots of the future in which I try to illustrate these possible (and current) developments. Again, I used ChatGPT to help me produce them.

A personalised think tank for policymakers: Policymakers and AI assistants

Visualise a future where policymakers, grappling with overwhelming amounts of data and diverse opinions, turn to their AI assistants. In such a scenario, an AI assistant becomes invaluable, condensing vast streams of information into actionable insights. An assistant like this might extract, from terabytes of research, the essence of a new urban housing policy, presenting policymakers with best practices, potential pitfalls, and stakeholder opinions, all tailored to their specific needs and preferences. This efficient synthesis allows policymakers to make well-informed decisions, ensuring their constituents’ welfare. Read the AI-generated case study: The AI-powered policymaker

Think nets: A web of researcher interactions

Researchers in think tanks often operate within a web of interactions, juggling multiple projects, stakeholders, and objectives. Imagine a future where AI intervenes, facilitating these interactions. An AI system could match researchers with similar interests, alerting them to overlapping projects or complementary skills. This way, instead of isolated efforts, think tanks can have a unified, more impactful approach to pressing global challenges. Read the AI-generated case study: The AI-enhanced think tank

Mentorship reimagined: AI avatars of leading figures 

One of the most intriguing applications of AI is its role in mentorship. Young students and budding researchers often struggle to find guidance, especially from leading figures who, due to constraints of time or geography, may be inaccessible. Enter AI avatars of these leading figures. With such technology, a student in a remote part of the world could “consult” with the AI representation of a Nobel laureate or a pioneer in their field, seeking guidance, feedback, and direction. This not only democratises knowledge but also reshapes the landscape of mentorship. Read the AI-generated case study: The AI avatar mentorship programme

The murkier waters: Consultancies, biases, and the threat of misuse

However, as with any potent tool, AI has its darker facets. Large consultancies, trusted by corporations and governments, might be tempted to leverage AI assistants to offer tailored advice to their clients. But what happens when this advice, deliberately or inadvertently, bears the imprint of bias? Consider a hypothetical where a global consultancy deploys its AI tool for a government in Rwanda. While AI might efficiently streamline processes and offer insights, it might also reflect the consultancy’s vested interests or inherent biases. Inadvertently, the country might find its policies swayed, not by its best interests, but by an AI’s underlying programming.

Furthermore, the threat amplifies when we consider the potential misuse by authoritarian regimes. If an AI avatar of a renowned researcher at a public university is misused to push forth propaganda or suppress dissent, the very ethos of academia and free speech is at risk.

Read the AI-generated case studies: 

Differentiating in the age of AI: A challenge for think tanks

In an AI-saturated future, where instant access to synthesised knowledge might diminish the need for traditional research, think tanks face the existential threat of obsolescence. In such a landscape, they must innovate to remain relevant. Perhaps by focusing on areas where human intuition, empathy, and judgment still trump AI. Or by ensuring that their research outputs and policy recommendations always uphold transparency, traceability, and ethics, reinforcing trust in an increasingly sceptical world. Read the AI-generated case study: Think tanks pioneering in an AI-dominated landscape.

AI and the future of philanthropy

The potential of AI in philanthropy is enormous. Imagine a programme officer at a New York-based foundation, aiming to develop an economic justice portfolio for the Global South. With AI’s assistance, he can dynamically allocate resources, monitor on-ground progress, assess the portfolio’s impact, and recalibrate strategies in real time. Such AI-augmented decision-making not only ensures efficient fund utilisation but also amplifies the portfolio’s overall impact.

Similarly, envision a scenario where AI avatars of different research funders collaborate, ensuring that their collective efforts are synergistically optimised. This real-time, AI-driven collaboration can revolutionise philanthropy, ensuring that every dollar spent has the maximum potential impact.

Large public and private funders will be able to sift through hundreds of thousands of proposals and assess them blindly, offer feedback, suggest improvements, and receive answers almost instantaneously – from researchers’ avatars!

Read the AI-generated case studies:

In conclusion

The intersection of AI with think tanks, policymaking, and philanthropy heralds a future replete with possibilities. As AI assistants become omnipresent, offering synthesised insights, facilitating collaborations, and streamlining processes, the world of research and decision-making is poised for a seismic shift. However, with these possibilities come inherent challenges, from the risk of biases and misuse to the threat of obsolescence for traditional research institutions.

It is critical that we tread with caution, ensuring that while we harness AI’s immense potential, we remain vigilant of its pitfalls.