Responsible AI
If AI is used in our products, it is intended to support the user’s own thinking and self-reflection. It is not intended to diagnose people, declare someone guilty or dangerous, or replace professional judgment in high-stakes situations.
What AI is intended for
AI may be used to help the user structure their thoughts, reflect on patterns, and explore language for feelings, boundaries and decisions.
AI is intended to be user-guided: the user remains the decision-maker, and the product should not present AI output as authoritative truth about another person.
What AI is not intended to do
AI is not intended to diagnose mental health conditions; predict violence; decide who is guilty, dangerous or abusive; or make legal, clinical or employment decisions.
AI is not intended to cast another person as an enemy, turn them into evidence, or make them a target for retaliation. If a feature's design approaches such use, the feature should be restricted, modified or removed.
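One way to make that review concrete is a purpose check at design time. The sketch below is illustrative only, not our implementation: the purpose labels and the review_feature helper are hypothetical names, and a real review would involve human governance, not just a lookup.

```python
# Illustrative sketch only: a design-time gate that flags features whose
# declared purposes overlap the uses AI is not intended for. The purpose
# labels and the review_feature helper are hypothetical.

PROHIBITED_PURPOSES = {
    "diagnosis",              # mental health conditions
    "violence_prediction",
    "guilt_or_abuse_verdict",
    "legal_decision",
    "clinical_decision",
    "employment_decision",
    "evidence_building",      # turning a person into evidence or a target
}

def review_feature(name: str, declared_purposes: set[str]) -> str:
    """Flag a feature for restriction, modification or removal on overlap."""
    overlap = declared_purposes & PROHIBITED_PURPOSES
    if overlap:
        return f"{name}: restrict, modify or remove (overlaps {sorted(overlap)})"
    return f"{name}: within intended use"

if __name__ == "__main__":
    print(review_feature("pattern_journal", {"self_reflection"}))
    print(review_feature("risk_scoring", {"violence_prediction"}))
```

A check like this cannot replace judgment; its value is forcing each feature to declare its purposes explicitly so that misuse-adjacent designs surface early.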
Safety boundaries and escalation
In acute danger, the product should direct the user away from the application and toward appropriate help, such as emergency services or a crisis line (see the sketch below).
Where a feature approaches therapy, healthcare, legal advice, crisis support or high-risk decision-making, it is evaluated separately against competence and responsibility requirements.
AI outputs should surface uncertainty and limits rather than present a single interpretation as the complete truth.
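A minimal sketch of both points, with escalation checked first and uncertainty surfaced in every reflective answer. All names here (looks_like_acute_danger, generate_reflection, the example signal phrases) are hypothetical; a real product would use a vetted risk classifier and localized crisis resources, not keyword matching.

```python
# Minimal sketch, not a production implementation. All names and signal
# phrases are hypothetical; a real product would use a vetted risk
# classifier and localized crisis resources rather than keyword matching.

from dataclasses import dataclass, field

CRISIS_MESSAGE = (
    "It sounds like you may be in immediate danger. This app is not the "
    "right help for that. Please contact local emergency services or a "
    "crisis line now."
)

ACUTE_SIGNALS = ("threatened to kill", "has a weapon", "afraid to go home")

def looks_like_acute_danger(text: str) -> bool:
    """Hypothetical placeholder for a vetted risk classifier."""
    lowered = text.lower()
    return any(signal in lowered for signal in ACUTE_SIGNALS)

@dataclass
class Reflection:
    summary: str
    # Alternative readings keep one interpretation from standing as
    # the complete truth.
    alternative_readings: list[str] = field(default_factory=list)
    limits: str = (
        "This is a reflection aid, not a diagnosis or a judgment "
        "about another person."
    )

def generate_reflection(user_text: str) -> Reflection:
    """Hypothetical stand-in for the model call."""
    return Reflection(
        summary="Here is one way to read what you described.",
        alternative_readings=["There may be context not yet mentioned."],
    )

def respond(user_text: str) -> str:
    # Escalation comes first: in acute danger, route away from the app.
    if looks_like_acute_danger(user_text):
        return CRISIS_MESSAGE
    reflection = generate_reflection(user_text)
    parts = [reflection.summary]
    if reflection.alternative_readings:
        parts.append("Other readings: " + "; ".join(reflection.alternative_readings))
    parts.append(reflection.limits)
    return "\n\n".join(parts)
```

The ordering matters: the danger check runs before any model call, so an unsafe situation receives a referral rather than a reflective answer, and every non-crisis response carries its own alternatives and limits.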
Full principles
See Values and responsible product principles (Version 1.0) for the complete policy, including user safety, AI boundaries and governance.