
AI policy
Last updated: January 2026
Chalkstream uses AI tools to support our research, consultancy and training. This client-facing policy summarises how we use AI in our work.
Our approach
If used well, AI improves the quality of our work. It gives us additional capabilities, particularly in data analysis, strategic planning and training scenario development. It supports our quality control processes and a wide range of our business and administration processes.
But there is a trade-off. AI risks for businesses like Chalkstream include: using it in ways that are unacceptable to clients; producing poor-quality work as a result of limited instructions or training data; failing to protect confidentiality; and degrading our own skills through cognitive offloading.
This policy has been developed to guard against those risks while maximising the opportunities of AI for Chalkstream.
The fundamentals (non-negotiables)
We will always
- Keep humans in the loop. Any AI output is treated as a suggestion and is reviewed, validated and edited by Chalkstream before use.
- Use AI only where it is proportionate to the task and the risk. If the risk is high, we do not use AI.
- Be transparent with clients. We reference this policy in proposals and include AI-use statements in reports.
- Use approved tools with enterprise security controls where available.
We will never
- Use AI for first drafts of client deliverables or client-facing communications. Our work starts with a human draft or human-led structure. AI may support review, stress-testing, sense-checking and editing only. We do this deliberately to protect quality and, in the longer term, our writing and thinking skills.
- Use AI to communicate on our behalf.
- Upload personal data, client confidential information, or Chalkstream confidential information into AI tools.
- Use AI in ways intended to mislead.
Confidentiality and data protection
What we may use with AI (only):
- Anonymised or de-identified information that cannot reasonably be used to identify an individual, organisation, or situation.
- Synthetic or dummy data created for training exercises, provided it is not derived from real client material.
What we will never put into an AI tool. This includes, but is not limited to:
- Any personal data (names, emails, phone numbers, identifiable job titles, recordings/transcripts that identify individuals, or free text that could identify someone).
- Any special category data (e.g., health, disability, ethnicity) in identifiable or re-identifiable form.
- Any client confidential information, including non-public strategy, board materials, draft positions, stakeholder intelligence, contract terms, pricing, internal performance issues, or procurement documentation.
- Any crisis and issues work content, including incident detail, response options, vulnerability assessments, legal/HR sensitivities, media handling plans, or stakeholder management actions (even if partially anonymised).
- Any credentials or security-related information that could increase the risk of a cyber incident or data breach.
Anonymisation and pseudonymisation (standard)
Where AI support is permitted, we anonymise/pseudonymise before any input is shared with an AI system. We do this manually or using a small language model hosted on a secure local server. If robust de-identification is not feasible, we do not use AI.
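To illustrate what this step can look like in practice, the short Python sketch below shows a minimal rule-based pseudonymiser that replaces obvious identifiers with neutral placeholders before anything is shared with an AI system. It is purely illustrative: the patterns, placeholder labels and function name are hypothetical examples, not Chalkstream's production rules, and our actual de-identification is carried out manually or via the locally hosted small language model described above.

    import re

    # Illustrative only: a minimal rule-based pseudonymiser. The patterns and
    # placeholder labels below are hypothetical examples, not Chalkstream's
    # actual de-identification rules.
    PATTERNS = {
        "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "[PHONE]": re.compile(r"\+?\d[\d\s-]{7,}\d"),
        "[NAME]": re.compile(r"\b(?:Dr|Mr|Mrs|Ms)\.?\s+[A-Z][a-z]+\b"),
    }

    def pseudonymise(text: str) -> str:
        """Replace likely identifiers with neutral placeholders before any AI input."""
        for placeholder, pattern in PATTERNS.items():
            text = pattern.sub(placeholder, text)
        return text

    # Example usage:
    # pseudonymise("Contact Ms Jones at jones@example.org or +44 20 7946 0000.")
    # -> "Contact [NAME] at [EMAIL] or [PHONE]."

A real pipeline would need far broader coverage (and human review) than pattern matching alone; if robust de-identification cannot be achieved, the material is simply not used with AI.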
Tools and security controls
Chalkstream uses enterprise-grade AI tools for permitted use-cases only (currently including Gemini, ChatGPT Enterprise and Claude).
Where tools provide settings to prevent training on prompts/outputs, those settings are enabled and used as standard.
New tools are evaluated through continuous experimentation and extensive secondary research before approval.
How we use AI in practice (examples)
We use AI only as supplementary support, and only when inputs meet the data rules above and outputs are human-checked.
Research
Permitted support uses can include: qualitative support (coding assistance, categorising open-text responses, thematic summaries, and formatting tables from anonymised/de-identified data); sense- and error-checking of anonymised summaries and executive summaries; summarisation of a small proportion of anonymised qualitative data; and scoped literature summaries (where compliant).
We do not use AI to produce first drafts of questionnaires, interview guides, discussion prompts, reports, or client recommendations.
Consultancy
Permitted support uses can include: structured analysis support for AI risk audits and governance work, where content is not client-identifiable and the thinking remains human-led; editing and sense-checking of human-drafted policy text where no confidential client detail is included.
We do not use AI to draft crisis statements, reactive lines to take, or scenario plans, and we do not upload client-sensitive context to 'improve' advice.
Training
Permitted support uses can include: drafting supporting learning assets (e.g., knowledge-check questions, alternative explanations, example prompts) after the core structure and learning objectives have been drafted by Chalkstream, using non-confidential inputs only.
We do not generate case studies that are recognisably derived from real client situations.
Quality assurance and accuracy
For any permitted AI use:
- We check all outputs before use, looking at factual accuracy, logic, internal consistency, unintended bias and completeness.
- No AI output is treated as authoritative without corroboration (especially statistics, legal assertions, or claims about organisations/individuals).
- We spot-check quantitative analysis as standard and may use additional tools for quality control.
- We guard against unintended bias through multiple checks, including spot-checks of AI-assisted coding and thematic analysis, and cross-validation where feasible.
- AI is treated as an assistant, not the analyst of record. Chalkstream remains responsible for methods and conclusions.
Client transparency and choice
AI use on client work is by agreement, scoped to specific tasks, and governed by the controls set out above. Client agreement is obtained through our standard proposal terms.
Where AI has supported a deliverable, we will include an AI statement describing how it was used (e.g., anonymised summarisation, sense-checking).
If you require a no-AI constraint for a project or workstream, we will comply and document the requirement.
Intellectual property
Our use of AI is human-centred: all work begins with human drafting or structure, and all AI outputs are substantially reviewed, validated and edited before use. This level of human intervention means that IP in our deliverables can be protected in the same way as any other materials we produce.
AI-assisted materials are assigned to clients with the same legal protections as any other Chalkstream deliverables.
Environmental considerations
Calculating the energy cost of AI usage is very hard. It varies depending on the model, the task, the required output, where the data centre is located and the time of day the query is made. Reasoning-intensive tasks are likely to consume significantly more energy than the reported average. The problem is exacerbated by the opacity of AI company energy reporting.
We do know, however, that advances in computing mean that the collective energy cost of Chalkstream's usage is likely to be a very small fraction of our overall energy use as a company. Nevertheless, Chalkstream has a policy of never prompting a large language model where a viable alternative exists. This includes re-using historical conversations in our training and other work (where confidentiality requirements allow). We also deploy a local small language model where practicable in order to minimise energy use.
We are also acutely aware of the challenges around AI's water use. AI's direct water use per prompt (in relation to the cooling of data centres) is negligible when compared with other human activities, and cutting power demand is the most effective way to reduce water use. The real risk is cumulative and contextual: while individual uses are trivial, large-scale deployment, especially in water-stressed regions, can create a meaningful aggregate impact. We keep a watching brief on the evidence, as it is currently inconclusive, but there may be a point where we decide to cease using AI for this reason.
Governance and incident reporting
This policy is owned by Chalkstream's Managing Director. All associates working with Chalkstream are required to meet equivalent confidentiality and data protection obligations. Any suspected data breach or accidental disclosure must be reported immediately and handled in line with Chalkstream procedures.
An iterative policy
Generative AI is evolving quickly. We keep this policy under active review and update it at least twice per year, and sooner where risk, law, tools, or client expectations change.
Contact
If you have questions about this policy or want to agree AI constraints for a project, contact Chalkstream via our Contact Us form.