
San Francisco Generative AI Guidelines

Top 5 Guidelines for Exploring with Generative AI

  1. You’re responsible
    Whether created by AI or a human, you are accountable for anything you use or share.
  2. Use secure tools
    Copilot Chat is approved for City use and available to all staff. Avoid public or consumer AI tools unless formally vetted — never enter sensitive or City data into them.
  3. Always check the output
    AI isn’t always right. Review, edit, fact-check, and test everything it generates.
  4. Be transparent
    Disclose AI use in public-facing or sensitive work. Record tools in the City’s 22J inventory and notify anyone directly impacted.
  5. No deepfakes
    Do not use AI to create fake images, audio, video, or other content that could be mistaken for real.

Introduction & Scope

Enterprise Generative AI (GenAI) tools procured and licensed by the Department of Technology (DT) are now available for use by staff of the City and County of San Francisco (City), opening new opportunities to improve the effectiveness, efficiency, and responsiveness of City services for all San Franciscans.

These guidelines are designed to help City staff use GenAI tools effectively and responsibly, while maintaining public trust, protecting resident data, and preserving the integrity of City systems.

With some important distinctions, these guidelines apply to both:

  • Enterprise GenAI tools—like ChatGPT Enterprise, Microsoft Copilot, Snowflake Cortex, Adobe apps and Express, and other approved systems—licensed and managed through the Department of Technology (DT). These tools have been procured and configured for City use and allow use of sensitive City data (and restrict any vendor use of City data for AI training).
  • Public or consumer GenAI tools—The use of public or consumer GenAI tools for City business is strongly discouraged, but we recognize that they may still be used in limited circumstances. If you want to use a public or consumer GenAI tool, you must obtain prior departmental approval.

City Guidelines for Generative AI Use

The GenAI uses outlined below are grouped by risk level, each with corresponding mitigation strategies and disclosure requirements.

As a general rule, City employees must always thoroughly review, edit, fact-check, validate, and/or test AI output, as applicable. You are ultimately responsible for any content you use or share.

Low-Risk Use

Internal Efficiency Tasks Performed Using Enterprise Generative AI Tools

You may use City-procured AI tools for:

  • Drafting internal emails, memos, or communications.
  • Creating summaries of meetings, documents, or reports.
  • Writing, editing, or debugging code.
  • Generating outlines or first drafts of internal materials.
  • Improving language access between the general public and City staff.

These uses help improve efficiency and reduce workload, but you remain the expert reviewer.

Safeguards and Responsibilities

AI can make mistakes or include biased or outdated information. To use it responsibly:

  • Only work with content you know well so you can spot errors.
  • Always fact-check and verify links and sources.
  • Review and edit AI output before using or sharing it.
  • Only use AI for coding if you know the language and can test the code.

Disclosure

No disclosure is needed for internal drafting, but you are responsible for all content you use, including any errors the AI introduces.

Medium- to High-Risk Use

Public-Facing or Sensitive Work Performed Using Enterprise Generative AI Tools

Use extra caution and follow additional steps when City-approved AI tools are used to perform tasks that affect public communication, services, or decisions, such as:

  • Drafting or translating public-facing content.
  • Drafting interview questions and screening materials for hiring processes.
  • Summarizing policy-related data.
  • Supporting decisions related to services, enforcement, or eligibility.
  • Contributing to documents that affect regulation or safety.

For these use cases, AI can serve as a support tool, but it should never make final decisions that affect individuals or public outcomes.

Safeguards and Responsibilities

  • Only use GenAI if you have deep subject-matter expertise to review its output.
  • Align outputs with the City’s values, equity goals, accessibility, and ethical standards.
  • Actively monitor for instances of bias and correct them manually.

Disclosure

  • Use of AI for public-facing or sensitive work must be documented through the 22J process.
  • Notify affected individuals when AI substantially contributes to a work product. Notices must include:
    • Statement that GenAI was used
    • Tool name/version
    • Confirmation of staff review
    • Contact info for questions or corrections
  • Cite AI like any external source when quoting or paraphrasing its output.
  • Always verify and cite original sources—not just AI summaries—when referencing third-party content.

Prohibited Uses

To protect public trust, safety, and ethical standards, do not use GenAI tools for any of the following:

  • Relying on AI to create official City documents or make decisions without expert human review.
  • Generating images, audio, or video that could be mistaken for real people (including public officials or members of the public).
  • Creating “deepfakes” or impersonations of any person or official—even with disclaimers.
  • Fabricating fictional survey respondents or public input for research or outreach purposes.
  • Relying on AI to review legal or regulatory issues.

Data Protection Requirements

The use of City data in Enterprise AI tools is subject to the following restrictions:

  • For Copilot Chat and Snowflake, you can use Level 4 data and below (Levels 1–4).
  • For ChatGPT Enterprise, you can use Level 3 data and below (Levels 1–3).
  • Only use PHI (Protected Health Information) in tools that have a BAA (Business Associate Agreement) in place, such as Copilot Chat and Snowflake, subject to your department’s approval. To verify which types of department-specific data are permitted for use with City-approved tools, always check with your department.
  • Do not enter any sensitive or protected data, including personal, health, or financial information, into public or consumer AI tools not provisioned or approved for City use.

Development of Guidelines, Versioning and Contact

The Emerging Technology Team developed these updated GenAI-focused guidelines in close coordination with the City’s AI Advisory Committee, a staff working group that provides guidance on the adoption, governance, and ethical use of emerging technologies in the City.

The AI Advisory Committee will regularly update the City Guidelines for Generative AI Use to reflect new law, regulations, lessons learned from application, and developments in GenAI technology. Check these Guidelines regularly for updates, and bookmark or subscribe to stay informed.

For questions or help with tool selection, training opportunities, or policy interpretation, please contact the Emerging Technology Team at ai@sfgov.org.
