Like millions around the world, I spent the holiday period playing with ChatGPT.
It is a large generative pre-trained transformer (GPT) language model developed by OpenAI. It is trained to generate human-like text using machine learning algorithms that analyse enormous amounts of data and learn the patterns and structures of language. In other words, it works by predicting the next best word in a sequence of words.
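The 'next best word' idea can be sketched with a toy example. The snippet below is purely illustrative (the corpus and function names are my own, not anything from OpenAI): it counts which word follows which in a tiny corpus and predicts the most frequent successor. Real models like ChatGPT use neural networks over subword tokens and vastly more data, but the core idea of choosing the most likely next token is the same.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus (hypothetical).
corpus = (
    "a think tank conducts research and a think tank provides analysis"
).split()

# Count which word follows which (a bigram model).
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word` in the corpus."""
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

print(predict_next("think"))  # -> tank
```

A large language model does essentially this at enormous scale, with learned probabilities rather than raw counts, and applies it repeatedly: each predicted word is appended to the input and the prediction runs again, producing fluent text one token at a time.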
I started with a simple, and obvious, question: What is a think tank?
‘A think tank is an organisation that conducts research and provides analysis and policy recommendations on a variety of issues, often related to public policy, economics, and political science. Think tanks can be independent or affiliated with specific political parties or ideologies, and they often work to influence public policy and decision-making through their research and recommendations. Think tanks may also engage in public outreach and education efforts, such as hosting events or publishing reports or articles. Some well-known think tanks include the Brookings Institution, the Heritage Foundation, and the Cato Institute.’
That is a pretty good definition, if you ask me. It picks up on think tanks’ affiliations (e.g. to political parties) and their multiple functions (e.g. education).
Immediately I tried a second question: What is evidence-informed policy?
‘Evidence-informed policy refers to the use of research evidence, along with other forms of knowledge and expertise, to inform the development and implementation of policies and programmes.’ The full answer continued: ‘The goal of evidence-informed policy is to use the best available evidence to inform decision-making, in order to improve the effectiveness and efficiency of policies and programmes and to achieve desired outcomes.
The process of evidence-informed policy typically involves the identification and synthesis of relevant research evidence, the analysis of the strengths and limitations of the evidence, and the application of the evidence to inform policy decisions. It also involves engaging with various stakeholders, including policymakers, practitioners, and the general public, to ensure that research evidence is considered in a transparent and inclusive manner.
Evidence-informed policy is an important approach to policymaking, as it helps to ensure that policies and programs are based on the best available evidence and are more likely to achieve their intended goals. It is used in a variety of fields, including health care, education, social policy, and environmental policy, among others.’
This is also rather good. The definition considers ‘other forms of knowledge and expertise’, mentions the ‘best available evidence’ and includes engagement and inclusivity! I would have liked a reference to politics, ideology and ethics; but most evidence-informed policy practitioners wouldn’t mention them either.
Then, at a Christmas party, a friend who works for a US Government contractor told me he had been using it in a much more sophisticated way, instructing it to produce original content. He felt that ChatGPT could present a real challenge to our practice.
So, there and then (in the middle of the party) I opened ChatGPT on my phone and jumped into the rabbit hole.
This article describes my first reactions and thoughts about how technologies such as ChatGPT may impact think tanks, the evidence-informed policy field and the kind of consulting that OTT does. I’m sure these ideas will evolve as we learn more. And I look forward to hearing the reactions, ideas and fears of others too.
My first exploration of ChatGPT
I must confess that, at first, the results I got from ChatGPT concerned me. But they also excited me.
This strategy document for a new (fictional) think tank in East Africa was eerily good, though it would need more detail and context to be implementable.
The first outline for the evidence-informed policymaking course was almost as good as anything I would expect from a course delivered by a leading actor in this field. Of course, the outline was not perfect. It missed modules on power, ethics and personal skills – although when I suggested them ChatGPT incorporated them.
Next, I asked ChatGPT to come up with a definition and examples of knowledge translation in the Global South. We have been involved in a year-long research project on this topic for IDRC and I had the sinking feeling that ChatGPT would outdo us.
The definition and the examples (both fictional and real) were rather good. But when I asked ChatGPT to offer an analysis of the differences between knowledge translation in the ‘North’ and ‘South’ it struggled. It wasn’t able to think outside the box and consider, for instance, that much of what is written about both knowledge translation and international development is written from the perspective of the Global North and is therefore biased towards a view of the world in which the South is unique and homogeneous; both very wrong assumptions.
I tried asking it to write me an article on the potential impact of artificial intelligence on think tanks. I found the result satisfactory content-wise, but not style-wise. I’m sure that better instructions could improve the style.
Next, could ChatGPT write an article setting the scene for the 2023 OTT Conference on think tanks and political uncertainty?
I was not satisfied with the first result. So, I opted for an interview format and the result improved considerably. This allowed me to explore issues from the macro to the micro; move from a description of general trends to how these affect think tanks and democratic institutions; double-down on misinformation; and ask for recommendations on specific strategies to pursue.
It felt like I was engaging with someone with lots of information but without purpose. ChatGPT had the information, I had the purpose.
Could ChatGPT fill this gap too? I tried by giving it a series of instructions with increasing levels of detail about who I was and why I wanted the information. The result was surprising: the responses were tailored to each audience and its intentions.
At the end of this first engagement with ChatGPT I was left with three questions:
Is this knowledge or just information?
I found it frustrating at university when we were tested on our ability to memorise facts that we could easily find in the real world (even before the internet). You can’t possibly know everything. And information is constantly changing. Surely it is more valuable to learn and test our ability to find relevant information and to make sense of it for an intended purpose: to make a decision, solve a problem, make sense of the world around us and so on?
ChatGPT’s only purpose is to ‘assist users by generating human-like text based on the input provided to me’. To what extent can it make sense of the information for an intended purpose?
Is knowledge, therefore, the combination of information and purpose? According to ChatGPT: ‘Knowledge is a broad term that refers to an understanding of a subject or a skill that is acquired through learning and experience. It can encompass a wide range of information, including facts, concepts, theories, and principles, as well as practical skills and expertise. Knowledge can be acquired through various means, such as education, research, observation, and experience. It is considered a key component of intelligence and is often seen as a valuable asset in many fields and endeavours. Knowledge can be used to inform decision-making, solve problems, and understand the world around us.’
Did I author any of this?
I did not physically write the content. But if I had not asked the questions or given the instructions these results would not exist. The order in which I asked the questions, requested further detail, suggested sources or analytical categories is unique to my thought process.
Thus, my purpose is reflected in the answers.
In asking these questions and giving these instructions I was in charge. I had the purpose. So, maybe, I did author this.
This will surely have implications for how research is produced and authorship is awarded in the future. It has implications for how graduate research papers are assessed by universities and how academic journals peer review prospective publications.
This technology could also be used to draft new legislation and policies. Will we still need lawyers to write laws or contracts? I find the possibilities endless.
What are the implications for our sector?
Certainly, the technology is good enough to merit being used by policymakers, thinktankers, consultants, journalists and others to get a first, quick and reliable answer. If I had to provide a client with a five-point memo on successful interventions to fight misinformation, I would try ChatGPT before attempting to comb through the internet, search through back papers from leading think tanks or even call an expert for input. One major drawback, however, is that the results do not include references to the source materials.
While ChatGPT does not seek to understand why I want to know something, it can infer it if I provide the right information. The biggest challenge to our sector will come when artificial intelligence takes the initiative to find out why we want to know something. It could infer it from a bio or LinkedIn page, or by trawling through our search histories, email and social media accounts. Or it could simply ask us, just as thinktankers and consultants would.
Sonia Jalfín from Sociopublico has been prophesying about the impact of artificial intelligence on think tanks for years. It seems the rest of us are now catching up.