AlgorithmWatch’s guidelines to use generative AI responsibly

Whether you use ChatGPT, Claude, Gemini, Copilot, or Perplexity – generative AI poses massive problems: many results are inaccurate or politically problematic, and the systems’ energy and water consumption is enormous. At the same time, these tools have become an integral part of everyday life. AlgorithmWatch has developed guidelines to help use generative AI responsibly.

As an organization, we fight against the irresponsible and unaccountable development, deployment, and use of digital technologies. But such technologies can also, when used responsibly, aid us in this mission. Generative AI is a particularly important example, which raises questions of how we act responsibly and balance benefits against risks.

This publication introduces the principles and processes of our policy. We hope it may provide a useful model for other organizations considering how they should use generative AI responsibly, balancing useful cases with the risks of these technologies. Developing and implementing such a policy is challenging, given the range of use cases, risks/benefits, and views on generative AI – many of which change rapidly.

Our approach started with a survey of our staff to establish (i) the beneficial use cases they find for generative AI and (ii) the concerns and risks they see around its use for AlgorithmWatch’s work. We then developed a policy designed to guide individual staff members as they decide whether, and how, to use generative AI in a way that aligns with our values and mission.

This is based on 4 principles:

  1. ⚖️ Proportionality
  2. 🗝️ Security
  3. 💯 Quality
  4. 🪟 Transparency

The policy also incorporates a structured process for collecting and discussing use cases and tools as well as updating the policy over time, which is necessary to address the range of uses and ongoing changes in the technology, its benefits, and its risks.

If you plan to adopt a similar policy in your organization, we would be delighted if ours can provide support or a model. From our experiences and discussions so far, we can say…

Get the full policy

Join our newsletter and download the complete policy, including our survey questions and a transparency note your organization can adapt for its own responsible AI strategy.

If you have already signed up for our community newsletter but want to download the full policy now, please sign up through the form anyway. Once you confirm your subscription, you will be able to download the file on the confirmation page.

We do not present this as a complete product – we are implementing and testing this policy and learning as we do so. We would be interested to hear from other organizations making similar efforts. You can reach us at info@algorithmwatch.org.


Our guidelines on the use of generative AI

AlgorithmWatch has developed a policy on how we use generative AI. As an organization, we fight against the irresponsible and unaccountable development, deployment, and use of digital technologies. But many such technologies can also, when used responsibly, aid us in this mission. Generative AI is a particularly important example, which raises questions of how we act responsibly and balance benefits against risks.

Generative AI is a class of tools that take user inputs and create new content – for example, generating text or other media based on a “prompt”. In what follows, generative AI should be interpreted broadly, to include services that translate between languages or writing styles, convert between media, and transcribe voice to text.

This document describes internal principles and current practices that we are in the process of implementing, relating to the use of generative AI. It is intended for informational purposes and does not constitute legally binding commitments or guarantees, nor does it replace or extend AlgorithmWatch’s other documentation (e.g., our privacy policy).

This policy draws on a survey of our staff (May 2025) establishing (i) the beneficial use cases they find for generative AI and (ii) the concerns and risks they see around its use for AlgorithmWatch’s work. The resulting policy:

  1. provides internal guidance to staff on how to use generative AI in a manner consistent with AlgorithmWatch's values, based on 4 principles: ⚖️ Proportionality, 🗝️ Security, 💯 Quality, and 🪟 Transparency.
  2. specifies how we will monitor and update various aspects of this policy based on what we call the GUIDE document and process.
  3. is published in a summary form (which you are currently reading) to indicate how we, as an organization particularly concerned with the challenges of deploying technologies responsibly, are considering our own usage of generative AI.

The text below outlines the 4 principles and the GUIDE process for updating aspects of this policy. If you wish to adapt or adopt this policy, we welcome it – and strongly advise beginning by surveying your staff to establish their current uses, needs, views, and concerns.

It is also important to note: this policy is designed for organizations that want to use technology in a responsible and ethical way – even when that means limiting its use – where this commitment is embedded in the organization’s values and staff are clearly aware of, and follow, these expectations. The policy does not specify “hard rules” to constrain irresponsible behavior or staff members who wish to use tools with no or minimal safeguards. Rather, it supports staff in making responsible individual decisions about their generative AI use by providing principles and a process for creating discussion, precedents, and ever-expanding guidance. We believe this is the most appropriate way for responsible organizations to respond effectively to the very broad range of use cases, user needs, and evolving situations around generative AI.

Principle 1: ⚖️ Proportionality

We strongly discourage staff from using generative AI simply because it seems the easier option when other appropriate options are available. Overuse of generative AI is associated with a series of systemic risks: from de-skilling individuals, to reducing demand for jobs, to increasing energy demands, to companies citing high usage rates as justification for reckless behavior.

However, our survey also showed that staff members do find substantial benefits to generative AI in some use cases. Staff also noted the importance of being inclusive with our policies, and different staff members have different needs; some use cases that are “fairly useful” for one staff member may help another overcome significant barriers.

“Proportionality” is a way to respond to this need for balance. Proportionality means we encourage staff members to reflect on, and justify, why they are using generative AI for a given use case rather than a “non-generative AI” approach.

We ask staff to internally reflect on their uses – and in some cases explicitly spell these out for discussion. These cases are:

The GUIDE document will be used to collect these cases for wider discussion. Over time this will develop into a series of agreed-upon precedents and guidance for staff in assessing the proportionality of their own use cases. Unless and until such discussion provides further guidance, staff should follow their own judgment and/or get input from their Team Lead.

Principle 2: 🗝️ Security

One of the major concerns expressed in our survey was what information we are comfortable inputting into generative AI tools. Data input into these tools may be stored and potentially used for further training of models, raising privacy, confidentiality, and undue-appropriation concerns. The use of data for training can also lead to “leakage” of input data to other users, the use of uncompensated labor for training, and other problems.

Some tools promise increased security and/or not to use data for training, though sometimes only under certain conditions (e.g., in paid-for versions). While these promises may provide some additional accountability, we are cautious about relying on them as a true safeguard, given technology companies’ periodic failures to protect data.

We therefore describe three ‘Tiers’ of content that staff might input into generative AI tools.

Staff should consult internal records to see (i) what sort of content falls under what Tier, and (ii) what tools are recommended for Tier 2 information. They should then choose their tool and adjust input information accordingly (e.g., remove some material).

Where existing guidance is not clear enough, staff should flag this to a Team Lead, who can, depending on the circumstances, make a temporary decision or escalate to other Team Leads or other relevant expertise (e.g., the Data Protection Officer) as required. These temporary decisions are recorded in the GUIDE and discussed, and a firm decision is then recorded as future guidance.

Staff questions, requests, and suggestions related to tools that may be (in)appropriate for Tier 2 content should likewise be recorded in the GUIDE.

Principle 3: 💯 Quality

Any generative AI output should be reviewed critically before use. Staff should expect outputs to require editing or questioning in some form – accepting outputs “as they come” likely indicates a lack of critical engagement, and staff who find themselves doing so should consider whether their quality assessment is sufficient.

Quality assurance should go beyond simple fact-checking and also consider, for example:

Generative AI should not be used to produce material on a topic unless the author(s) and editor(s) have, or gain, familiarity with that topic through means not involving generative AI – existing expertise, contacting relevant experts, and/or research conducted without generative AI assistance.

Where practical, given resource constraints, try to involve at least one fellow staff member in this check process (even if this is as simple as explaining what steps you have taken).

Staff are encouraged to (i) summarize the safeguards used in specific cases in 🪟 Transparency Notes and record them in the GUIDE, and (ii) record broader ideas or reflections on quality assurance in the GUIDE.

Principle 4: 🪟 Transparency

In order to hold ourselves accountable for the other principles in this policy, it is important that we are transparent with ourselves and others.

When we publish material in which generative AI played a substantial role in creating the product, we discuss whether to include a Transparency Note explaining how generative AI was used.

At staff discretion, similar Transparency Notes may also be appended to unpublished work (such as documents for internal use or for partners, or documentation about systems used for internal operations) if generative AI played a substantial role in its production.

These notes should be copied into the GUIDE as they provide valuable insights into how the principles are being applied in practice.

Examples of what might, or might not, count as “substantial” are listed in internal records and iterated over time via the GUIDE process. These are meant as guidance and precedents for individual decisions; again, hard-and-fast rules are extremely challenging to define given the range of possible use cases.

For guidance, a couple of examples of “substantial” uses – i.e., those that should prompt a staff member to consider a Transparency Note – include:

The GUIDE

There is an internal GUIDE document, accessible to all staff, which contains:

Inspecting and discussing the GUIDE will be a standing item in the AlgorithmWatch Team Leads meeting, which happens approximately once per month. Decisions made by Team Leads can then be recorded in the GUIDE. Objections to decisions from staff will be taken back to Team Leads, either in the Team Leads meeting or (if more urgent) by discussion via internal messaging. In case of disputes among Team Leads that cannot be resolved by discussion, final decision-making falls to Executive Management. Documentation can also be supported by regular internal capacity-building sessions on applying the policy.