AlgorithmWatch’s guidelines for using generative AI responsibly
Whether you use ChatGPT, Claude, Gemini, Copilot, or Perplexity – generative AI poses massive problems: many results are inaccurate or politically problematic, and the systems’ energy and water consumption is enormous. At the same time, these tools have become an integral part of everyday life. AlgorithmWatch has developed guidelines to help use generative AI responsibly.

As an organization, we fight against the irresponsible and unaccountable development, deployment, and use of digital technologies. But such technologies can also, when used responsibly, aid us in this mission. Generative AI is a particularly important example: it raises the question of how we act responsibly and balance benefits against risks.
This publication introduces the principles and processes of our policy. We hope it may provide a useful model for other organizations considering how to use generative AI responsibly, balancing useful applications against the risks of these technologies. Developing and implementing such a policy is challenging, given the range of use cases, risks and benefits, and views on generative AI – many of which change rapidly.
Our approach started with a survey of our staff to establish (i) the beneficial use cases they see for generative AI and (ii) the concerns and risks they see around its use in AlgorithmWatch’s work. We then developed a policy designed to guide individual staff members as they decide whether, and how, to use generative AI in a way that aligns with our values and mission.
The policy is based on four principles:
- ⚖️ Proportionality
- 🗝️ Security
- 💯 Quality
- 🪟 Transparency
The policy also incorporates a structured process for collecting and discussing use cases and tools, and for updating the policy over time. This is necessary to address the range of uses and the ongoing changes in the technology, its benefits, and its risks.
If you plan to adopt a similar policy in your organization, we would be delighted if ours can serve as support or a model. From our experiences and discussions so far, we can say…
- … it is good to begin with a survey of current uses and attitudes in your organization, and then to share the results (even in aggregate) with your staff. This helps ensure that your policy reflects the range of views and uses in your organization, and it makes that range visible to everyone. We include the survey we used as an appendix in the document below.
- … our policy provides guidelines for individual decision-making, which keeps it flexible and adaptable to many use cases. However, this only works if you are confident that your staff are concerned about the risks of generative AI and want to use it only in ways that balance risk and benefit. Otherwise, clearer and stricter rules may be a better approach.
- … it is good to have a clear process for adapting the policy over time as you implement it and as circumstances change.
Get the full policy
Join our newsletter and download the complete policy, including our survey questions and a transparency note your organization can adapt for its own responsible AI strategy.
We do not present this as a complete product – we are implementing and testing this policy and learning as we do so. We would be interested to hear from other organizations making similar efforts. You can reach us at info@algorithmwatch.org.
Our guidelines on the use of generative AI
AlgorithmWatch has developed a policy on how we use generative AI. As an organization, we fight against the irresponsible and unaccountable development, deployment, and use of digital technologies. But many such technologies can also, when used responsibly, aid us in this mission. Generative AI is a particularly important example: it raises the question of how we act responsibly and balance benefits against risks.
Generative AI is a class of tools that take user inputs and create new content, for example by generating text or other media outputs based on a “prompt” or by translating between languages, writing styles, and media. In what follows, generative AI should be interpreted broadly – including, for example, AI services that translate between languages or transcribe voice to text.
This document describes internal principles and current practices that we are in the process of implementing, relating to the use of generative AI. It is intended for informational purposes and does not constitute legally binding commitments or guarantees, nor does it replace or extend AlgorithmWatch’s other documentation (e.g., our privacy policy).
This policy draws on a survey of our staff (May 2025) to establish (i) the beneficial use cases they see for generative AI and (ii) the concerns and risks they see around its use in AlgorithmWatch’s work. The resulting policy:
- provides internal guidance to staff on how to use generative AI in a manner consistent with AlgorithmWatch's values, based on four principles: ⚖️ Proportionality, 🗝️ Security, 💯 Quality, and 🪟 Transparency.
- specifies how we will monitor and update various aspects of this policy based on what we call the GUIDE document and process.
- is published in a summary form (which you are currently reading) to indicate how we, as an organization particularly concerned with the challenges of deploying technologies responsibly, are considering our own usage of generative AI.
The text below outlines the four principles and the GUIDE process for updating aspects of this policy. If you wish to adapt or adopt this policy, we welcome that and strongly advise beginning with a survey of your staff to establish their current uses, needs, views, and concerns.
It is also important to note: This policy is designed for organizations that want to use technology in a responsible and ethical way, even when this involves limiting the use of technology; where this commitment is embedded in the organization’s values; and where staff are clearly aware of and follow these expectations. The policy does not specify “hard rules” to constrain irresponsible behavior or staff members who wish to use tools with no or minimal safeguards. Rather, it supports staff in making individual, responsible decisions about their generative AI use by providing principles and a process for creating discussion, precedents, and ever-expanding guidance. We believe this is the most appropriate way for responsible organizations to respond effectively to the very broad range of use cases, user needs, and evolving situations around generative AI.
Principle 1: ⚖️ Proportionality
We strongly discourage staff from using generative AI simply because it seems the easier option when other appropriate options are available. Overuse of generative AI is associated with a series of systemic risks: from the de-skilling of individuals, to reduced demand for jobs, to increased energy demands, to companies citing high usage rates as justification for reckless behavior.
However, our survey also showed that staff members do find substantial benefits in generative AI for some use cases. Staff also noted the importance of inclusive policies: different staff members have different needs, and a use case that is merely “fairly useful” for one staff member may help another overcome significant barriers.
“Proportionality” is our way of responding to this need for balance. It means we encourage staff members to reflect on, and justify, why they are using generative AI for a given use case rather than a non-generative AI approach.
We ask staff to internally reflect on their uses – and in some cases explicitly spell these out for discussion. These cases are:
- When writing a Transparency Note (see the 🪟 Transparency section later).
- When encountering a use case that raises challenging personal decisions about whether this is proportional or not.
- When encountering a use case that we believe is very hard to justify in general as “proportional” for AlgorithmWatch and that should therefore be avoided.
The GUIDE document will be used to collect these cases for wider discussion. Over time, this will develop into a series of agreed-upon precedents and guidance to help staff assess the proportionality of their own use cases. Unless and until such discussion provides further guidance, staff should follow their own judgment and/or get input from their Team Lead.
Principle 2: 🗝️ Security
One of the major concerns expressed in our survey was what information we are comfortable inputting into generative AI tools. Data input into such tools may be stored and potentially used for further training of models, raising issues of privacy, confidentiality, and undue appropriation. The use of data for training can also lead to “leakage” of input data to other users, rely on uncompensated labor, and raise further concerns.
Some tools promise increased security and/or not to use data for training, though sometimes only under certain conditions (e.g., in paid-for versions). While these promises may provide some additional accountability, we should be cautious of relying on them too heavily as a true safeguard, given technology companies’ periodic failures to protect data.
We therefore describe three “Tiers” of content that staff might input into generative AI tools.
- Tier 1 (Low security): Content that does not raise any specific issues when being entered as an input into a generative AI tool, even with few/no safeguards.
- E.g., publicly available information.
- Tier 2 (Medium security): Content that we feel comfortable inputting into specific generative AI tools on the basis that they apply sufficient privacy/confidentiality safeguards, and that would not raise high risks (e.g., exposing confidential or personal information) if those safeguards were to fail.
- E.g., internal AlgorithmWatch strategy documents that do not contain sensitive information.
- Tier 3 (High security): Content that should not be input into generative AI tools, even if they promise safeguards.
- E.g., personally identifiable information.
Staff should consult internal records to see (i) what sort of content falls under what Tier, and (ii) what tools are recommended for Tier 2 information. They should then choose their tool and adjust input information accordingly (e.g., remove some material).
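To illustrate how such internal records could work in practice – purely as a hypothetical sketch, not part of our policy – the tier guidance might be kept in a simple machine-readable form. All content categories, tool names, and tier assignments below are invented placeholders:

```python
from enum import Enum

class Tier(Enum):
    LOW = 1     # Tier 1: no specific issues, even with few/no safeguards
    MEDIUM = 2  # Tier 2: only via tools with sufficient privacy/confidentiality safeguards
    HIGH = 3    # Tier 3: must never be input into generative AI tools

# Hypothetical internal records mapping content categories to Tiers.
CONTENT_TIERS = {
    "public_information": Tier.LOW,
    "internal_strategy_non_sensitive": Tier.MEDIUM,
    "personally_identifiable_information": Tier.HIGH,
}

# Hypothetical set of tools vetted for Tier 2 content.
TIER2_APPROVED_TOOLS = {"example-tool-with-safeguards"}

def may_input(content_category: str, tool: str) -> bool:
    """Return True if content of this category may be input into this tool.

    Unknown categories are treated conservatively: flag them to a Team Lead.
    """
    tier = CONTENT_TIERS.get(content_category)
    if tier is None:
        return False  # no existing guidance -> escalate (see below)
    if tier is Tier.LOW:
        return True
    if tier is Tier.MEDIUM:
        return tool in TIER2_APPROVED_TOOLS
    return False  # Tier 3: never, even if the tool promises safeguards

print(may_input("public_information", "any-tool"))                    # True
print(may_input("internal_strategy_non_sensitive", "unvetted-tool"))  # False
```

The conservative default for unknown categories mirrors the escalation path described next: where guidance is missing, the answer is “not yet” rather than “yes”.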
Where existing guidance is not clear enough, staff should flag this to a Team Lead who can, depending on the circumstance, provide a temporary decision or escalate to other Team Leads or other relevant expertise (e.g., the Data Protection Officer) as required. Temporary decisions are recorded in the GUIDE and discussed, and a firm decision is then recorded as future guidance.
Staff questions, requests, and suggestions related to tools that may be (in)appropriate for Tier 2 content should likewise be recorded in the GUIDE.
Principle 3: 💯 Quality
Any generative AI output should be reviewed critically before use. Staff should expect outputs to require editing or questioning in some form – accepting outputs “as they come” likely indicates a lack of critical engagement, and staff who find themselves doing this should consider whether their quality assessment is sufficient.
Quality assurance should go beyond simple fact-checking and also consider, for example:
- whether generative AI is “framing” the way we think about topics by directing our attention to some things and away from others;
- whether certain perspectives are given too much or too little weight, or are missing outright from the outputs;
- whether citations have been used appropriately (which requires actually reading them);
- whether the output aligns with previous AlgorithmWatch work on the topic, including in tone and writing style.
Generative AI should not be used to produce material on a topic unless the author(s) and editor(s) have, or gain, additional familiarity with that topic through methods not involving generative AI – for example, existing expertise, contacting relevant experts, and/or research conducted without generative AI assistance.
Where practical given resource constraints, try to involve at least one fellow staff member in this checking process (even if this is as simple as explaining what steps you have taken).
Staff are encouraged to (i) summarize the safeguards used in specific cases in 🪟 Transparency Notes, thereby recording them in the GUIDE, and (ii) record broader ideas or reflections on quality assurance in the GUIDE.
Principle 4: 🪟 Transparency
In order to hold ourselves accountable for the other principles in this policy, it is important that we are transparent with ourselves and others.
When we publish material in which generative AI played a substantial role in creating the product, we discuss whether to include a Transparency Note explaining how generative AI was used.
At staff discretion, similar Transparency Notes may also be appended to work that is not published (such as documents for internal use or for partners, or documentation about systems used for internal operational purposes) if generative AI played a substantial role in their production.
These notes should be copied into the GUIDE as they provide valuable insights into how the principles are being applied in practice.
Examples of what might count as “substantial”, or not, are listed in internal records and iterated over time via the GUIDE process. These are meant as guidance and precedents for individual decisions; again, hard-and-fast rules are extremely challenging to define given the range of possible use cases.
For guidance, two examples of “substantial” uses – i.e., uses that should prompt a staff member to consider a Transparency Note – are:
- Publications in which generative AI was used to draft large portions of the text (as a rough guide, entire paragraphs or more), even if this “base text” was subsequently edited. This should be considered in context: it would not apply, for example, to short social media posts, or to cases where the first version was written by a human in one language and generative AI was used only for translation. Such outputs should still be checked for quality, but a Transparency Note would not be recommended.
- Using generative AI to develop ideas or select research routes that informed the direction of sections of, or all of, a product or project. Finding or checking specific details would generally not be substantial enough (though it should still undergo quality assurance). For instance, a prompt like “list all CEOs of the GAFAM companies with dates” is specific and can be easily checked, so it would not generally need a note. But using a chatbot in a back-and-forth discussion about “what are some key developments in the history of GAFAM” would need one, as the chatbot’s selection of “key” points could shape the direction of the product.
The notes need not follow a standard format, but referring to the other Principles ⚖️ 🗝️ 💯 will usually be helpful. A note should be brief and need not be extensively detailed – you do not need to list your precise prompts, for example. But there should be an internal copy of the note with your name and contact information for anyone who wishes to know more (we do not include this personal information in material we publish).
The GUIDE
There is an internal GUIDE document, accessible to all staff, which contains:
- 🪟 Transparency Notes (which in turn also record decisions related to the other three Principles ⚖️ 🗝️ 💯).
- Examples related to ⚖️ Proportionality, in particular (i) common use cases that we consider generally inappropriate and (ii) difficult cases.
- Decisions about 🗝️ Security (what Tier particular content falls into and/or how safe certain tools are).
- Discussions of what does and does not count as “substantial use” and therefore requires a 🪟 Transparency Note.
Inspecting and discussing the GUIDE will be a standing item in the AlgorithmWatch Team Leads meeting, which takes place approximately once per month. Decisions made by Team Leads can then be recorded in the GUIDE. Staff objections to decisions will be taken back to the Team Leads, either in the Team Leads meeting or (if more urgent) via internal messaging. In case of disputes among Team Leads that cannot be resolved by discussion, final decision-making falls to Executive Management. Documentation can also be supported by regular internal capacity-building sessions on the application of our policy.
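The policy prescribes no particular data format for the GUIDE. Purely as a hypothetical sketch – the fields and example values below are invented for illustration – a GUIDE entry could be structured along these lines:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Principle(Enum):
    PROPORTIONALITY = "⚖️ Proportionality"
    SECURITY = "🗝️ Security"
    QUALITY = "💯 Quality"
    TRANSPARENCY = "🪟 Transparency"

@dataclass
class GuideEntry:
    """One hypothetical record in the GUIDE document."""
    recorded_on: date
    principles: list[Principle]  # which Principle(s) the entry relates to
    summary: str                 # e.g., a Transparency Note or a Tier decision
    status: str = "temporary"    # "temporary" until discussed at the Team Leads meeting

# Example: a temporary Security decision awaiting the monthly Team Leads meeting.
entry = GuideEntry(
    recorded_on=date(2025, 6, 1),
    principles=[Principle.SECURITY],
    summary="Draft report without personal data: treated as Tier 2 pending discussion.",
)
```

Keeping entries in a consistent shape like this would make it easier to review temporary decisions at the monthly meeting and to turn them into firm, searchable precedents.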