Over the past two years, I have sat in countless meetings where think tanks debated how governments should regulate artificial intelligence. I have read thoughtful policy briefs on algorithmic accountability, transparency, and ethical AI. Yet in many of these same organisations, a basic question remains unanswered: how are we using AI inside our own institutions?
This gap matters more than most leaders realise. Think tanks are not neutral observers of technological change. We are early adopters, translators, and amplifiers of ideas. If we expect credibility when advising governments on AI governance, we need to demonstrate that we have done our own institutional homework.
That is why every think tank now needs what I call an internal “AI Constitution”.
Not a legal document or a technical manual. A simple, shared framework that defines how AI tools can and cannot be used inside the organisation, and what values guide those decisions.
The credibility problem no one talks about
In my advisory work, I increasingly see think tanks rushing to “AI-proof” their external messaging while neglecting their internal practices. Staff quietly use generative tools to draft memos, summarise interviews, or analyse survey data. Junior researchers experiment with ChatGPT for literature reviews. Communications teams use AI to rewrite policy summaries for social media.
None of this is inherently wrong. In fact, avoiding AI entirely is neither realistic nor desirable.
The problem is silence. No shared rules. No disclosure norms. No guidance on sensitive data. No discussion of risk.
This creates three institutional vulnerabilities.
- Credibility Risk: It is difficult to argue for ethical AI governance externally while internally relying on opaque tools without safeguards.
- Legal and Ethical Exposure: Staff may unknowingly upload confidential interviews, donor-sensitive material, or unpublished research into public models.
- The “Junior Researcher” Trap: Perhaps the most subtle risk is to mentorship. If junior staff use AI to skip the “grunt work” of literature reviews and summarisation, they risk bypassing the very tasks that train them to think critically. We risk hollowing out our future capacity for the sake of short-term efficiency.
An internal AI Constitution is a way to address all three.
What an “AI Constitution” actually is
Think of it as a short, practical document that answers five core questions:
- What AI tools are allowed, discouraged, or prohibited?
- What types of data can never be uploaded into external systems?
- When should AI use be disclosed internally or externally?
- Who is accountable when AI-assisted work goes wrong?
- How do we ensure AI supports, rather than replaces, critical thinking?
It does not need to be perfect. It needs to be clear. In my experience, the most effective AI Constitutions are three to five pages long, written in plain language, and approved at the leadership level. They are living documents, revisited annually, not static compliance exercises.
How to draft an AI Constitution: A 6-step framework
- Start with Values, Not Tools. Before listing platforms or software, start with principles. This anchors the policy in institutional identity, not technology hype. Your core values might include:
- Human Accountability: A human must always be the final “author” and take responsibility for every word published.
- Originality over Consensus: We acknowledge that LLMs are trained on “average” data and consensus views. We use AI to challenge our thinking, not to replace the nuance of local context.
- Protecting the Learning Curve: We encourage junior staff to use AI to critique their drafts, not to write them.
- The “Feature vs. Wrapper” Test. A common challenge is deciding which tools to allow when new ones appear daily. Whitelisting specific apps is a losing battle. Instead, evaluate the underlying engine and the Terms of Service using an “Enterprise Standard”:
- Data Retention: Does the tool use your inputs to train its model? (If yes, it should be banned for sensitive work.)
- Source Transparency: Does the tool cite sources? (Essential for research.)
- Security: Do we have an enterprise license that guarantees data privacy?
By focusing on these criteria rather than brand names, your policy remains relevant even as the specific apps change. (A minimal checklist sketch of this test appears after the framework below.)
- Map Real Use Cases. Do not guess how staff use AI. Ask them. In workshops I facilitate, I often discover that researchers use AI for translation, comms teams use it for tone adaptation, and programme staff use it to structure proposals. Mapping actual behaviour allows the policy to respond to reality, not fear.
- Draw Hard Red Lines Around Data. This is the most urgent section. Your AI Constitution should clearly state what must never be uploaded to public or external AI systems; a simple pre-upload gate is sketched after the framework below. This typically includes:
- Confidential interview transcripts (unless anonymised and on a secure, private instance).
- Personal data of research participants (GDPR/privacy compliance).
- Donor agreements and internal evaluations.
- Unpublished datasets or draft policy positions.
- The “Zero-Trust” Rule for Citations. For a think tank, a fake citation is fatal. Your policy must include a specific “Hallucination Clause.”
- The Rule: “Any claim, statistic, or citation generated by AI must be verified against the primary source by a human researcher.”
- We must treat AI as a fallible research assistant, never as an authoritative source.
- Define Disclosure Norms. One of the most sensitive questions is disclosure. Do staff need to say when AI was used internally? Externally? There is no universal answer, but there must be a shared one. Some organisations require disclosure when AI materially shapes analysis or wording (e.g., “This report was summarised with the assistance of AI and reviewed by the author”). Others limit disclosure to internal processes. What matters is consistency and honesty.
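To make the “Feature vs. Wrapper” test concrete, here is a minimal sketch of the checklist expressed in code. The tool names, attributes, and function below are hypothetical illustrations of the three criteria, not assessments of real products.

```python
# A minimal sketch of the "Feature vs. Wrapper" checklist. All names here are
# hypothetical; substitute your own assessments of the tools your staff use.

from dataclasses import dataclass

@dataclass
class ToolAssessment:
    name: str
    trains_on_inputs: bool      # Data retention: are your prompts used to train the model?
    cites_sources: bool         # Source transparency: does it point to verifiable sources?
    enterprise_license: bool    # Security: is data privacy contractually guaranteed?

def allowed_for_sensitive_work(tool: ToolAssessment) -> bool:
    """The 'Enterprise Standard': all three criteria must hold."""
    return (not tool.trains_on_inputs) and tool.cites_sources and tool.enterprise_license

# Example: the same underlying engine can pass or fail depending on the licence.
free_tier = ToolAssessment("GenericChat (free tier)", True, False, False)
enterprise = ToolAssessment("GenericChat (enterprise)", False, True, True)

print(allowed_for_sensitive_work(free_tier))   # False
print(allowed_for_sensitive_work(enterprise))  # True
```

Because the check is written against criteria rather than brand names, the list of assessed tools can change weekly without the policy itself changing.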
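A similarly small sketch illustrates the pre-upload gate for the data red lines. It assumes documents carry classification tags from whatever document management system the organisation already uses; the tag names and function are illustrative, and a gate like this complements, rather than replaces, staff judgement.

```python
# A minimal sketch of a data red-line gate. Tag names are illustrative; real
# enforcement would rely on your document classification and consent records,
# not this list alone.

RED_LINE_TAGS = {
    "interview-transcript",       # unless anonymised and on a secure, private instance
    "participant-personal-data",  # GDPR / research-ethics protected
    "donor-agreement",
    "internal-evaluation",
    "unpublished-dataset",
    "draft-policy-position",
}

def may_upload_to_external_ai(document_tags: set) -> bool:
    """Return True only if none of the document's tags cross a red line."""
    return RED_LINE_TAGS.isdisjoint(document_tags)

# Usage: run the check before any text reaches an external AI service.
print(may_upload_to_external_ai({"published-report", "country-brief"}))  # True
print(may_upload_to_external_ai({"interview-transcript", "draft"}))      # False
```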
Lessons from early adopters
While many internal policies remain private, we can look to leaders in the field for direction. The Urban Institute, for example, has long set the standard for data privacy, employing strict protocols on how researchers handle datasets—principles that naturally extend to AI usage. Similarly, large institutions like The Brookings Institution have published extensively on AI governance, implicitly signalling that their internal rigour must match their external policy recommendations.
However, the most valuable lessons often come from internal adjustments. One mid-sized policy research organisation I advised discovered during an internal review that staff were routinely pasting raw interview transcripts into public generative AI tools to speed up coding and thematic analysis. While the practice was well-intentioned, it exposed sensitive stakeholder data and violated consent assumptions.
In response, the organisation introduced a simple rule: AI could be used for summarisation only after transcripts were anonymised and processed through a secure, non-retentive environment, with a mandatory human review step before insights entered any policy output. The rule was not framed as a ban but as a safeguard for quality and trust, and was quickly adopted across teams.
From organisations that have already taken this step, three additional lessons stand out:
First, staff welcome clarity. Most people are not trying to cut corners; they are anxious about doing the wrong thing. They want to know what is acceptable.
Second, the conversation matters as much as the document. Drafting the policy collaboratively builds internal literacy and trust. It forces teams to debate what “quality” means in the age of AI.
Third, funders notice. Increasingly, donors ask about data governance, digital security, and responsible AI use. Having an internal framework signals institutional maturity and shows you are a safe pair of hands for their data.
Why this matters now
AI governance debates are accelerating. Think tanks are central actors in shaping them. But credibility is cumulative. It is built not only through what we publish, but through how we operate.
An internal AI Constitution is not about control. It is about alignment—between values and practice, and between external advice and internal behaviour. If think tanks want to remain trusted brokers in the evidence ecosystem, we need to walk the talk. That starts at home.
About the Author
Dr. Tony Bader is an AI Strategist and advisor to research institutes and healthcare organisations, specialising in the governance, ethics, and operational risks of artificial intelligence.