Values, purpose and principles of software development
High-level ethical policy document
Core statement. We do not build applications against anyone. We build tools that support self-reflection and the user's own agency, with the aim of increasing the safety, well-being and responsible engagement of all parties.
1. Purpose of the document
This document defines the purpose, value base and high-level ethical principles of software development for Safety in Relationships Oy Ltd. It guides what kinds of products and features we build, what limitations we observe and on what grounds we decide not to do certain things.
The policy is intended to support practical decision-making. Its task is not to resolve individual legal, clinical or safety questions, but to provide a foundation for how we assess the direction of our products, their impacts and possible misuse risks.
2. Purpose of our work
The purpose of Safety in Relationships Oy Ltd is to increase people's ability to build safer, less violent and happier relationships. We support individuals in identifying their own emotions, needs, boundaries, behaviour patterns and choices, without positioning other people as guilty parties, enemies or diagnoses on the user's behalf.
Our aim is to strengthen the user's own agency. In practice this means tools that help the user to pause, structure their own situation, make more considered decisions and seek appropriate help when needed.
Underlying premise. Safety and well-being do not arise from confrontation. They arise from non-violence, self-understanding, responsibility, clear boundaries and respectful engagement.
3. Ethical starting point
Our work is based on the idea that the safety of relationships should not be built on group confrontation, gendered enemy images, labelling or revenge. Violence, control, intimidation, humiliation and the breaking of boundaries are serious matters. They should not, however, be addressed in a way that produces further polarisation, false generalisations or new harms.
For this reason, Safety in Relationships Oy Ltd currently confines itself to self-reflection and individual support tools. We do not position our products as therapy, as a public-authority service, as a court, as a tool of criminal procedure, or as an instrument with which the user can build a case against another person.
This limitation is both ethical and practical. A therapeutic, clinical or legal service promise could entail questions of regulation, qualifications, liability and user safety that should not be bypassed through marketing or product development. Likewise, a product line based on an enemy image could increase the user's sense of justification, anger or conflict instead of increasing safety and well-being.
3.1 Organisational cooperation and documentation applications
The limitation described above applies primarily to self-reflection and support tools that Safety in Relationships Oy Ltd publishes in its own name and that are aimed at individuals. With organisations, we may separately develop applications whose primary purpose is documentation, structuring observations, harmonising processes, planning follow-up actions or quality assurance of work.
Such solutions are developed and deployed in the name of the partner organisation and within the responsibility framework defined by it. Purpose of use, mode of operation, user information, data protection grounds, access rights, instructions, decision-making and assessments related to statutory or professional obligations are defined together with the organisation. The role of Safety in Relationships Oy Ltd is that of an expert in technology, usability, information management and ethical design, not a therapist, lawyer, public authority, researcher or decision-maker acting on behalf of the organisation.
In documentation tools developed for organisations, the same fundamental ethical limitation applies: solutions are not produced against anyone. The task of the application is not to make automated determinations of guilt, truth or dangerousness, but to support precise, traceable, privacy-aware documentation that increases safety. The tool should help to distinguish observations, interpretations, risks, needs, actions taken and possible follow-up actions from one another.
4. Values
Human dignity and equality
Every person has the right to be met with dignity. We do not build products whose starting point is the labelling of any gender, group, role or life situation.
Safety and non-violence
We are committed to non-violence at the physical, psychological, sexual, economic and digital levels. Safety means both protection from harm and the ability to act without fear, pressure or control.
Compassion and boundaries
Compassion does not mean an absence of limits, nor does it mean removing responsibility. Boundaries do not mean inhumanity. We support both: the capacity to see a person as a whole, and the capacity to protect one's own well-being.
Responsibility and agency
Our products should strengthen the user's responsible agency. We do not promise to resolve a relationship on the user's behalf, nor do we encourage the user to outsource their own judgement to an application.
Objectivity and source criticism
We aim for a neutral approach that respects facts and acknowledges uncertainty. We avoid drawing definitive conclusions about situations whose full picture we do not know.
Privacy and confidentiality
Information related to relationships is sensitive. For this reason, data minimisation, user control, transparency and prevention of misuse are basic requirements of product development.
5. The thinking behind how our software is built
Our software is built primarily to support the individual's own thinking, emotional regulation, recognition of boundaries and considered decision-making. The product is not the other party in a relationship, nor a therapist, judge, lawyer, crisis worker or public authority. It is a tool for sense-making.
A good product helps the user to slow down their reactions, to put observations into words, to distinguish facts from interpretations, to identify their own needs and to consider safe next steps. The product should support the user in a way that increases clarity rather than accelerating conflict.
| What we build | What we do not build |
|---|---|
| Tools for self-reflection, journaling, naming emotions and recognising boundaries. | Tools whose purpose is to label, shame, control, score or diagnose another person. |
| Neutral questions that help the user to consider their own situation and options. | Content that tells the user, as certain truth, what another person is thinking, intending or is. |
| Safe pathways towards appropriate help, crisis services or professionals when the situation requires it. | Service promises that replace therapeutic, clinical, legal or public-authority decisions without separate evaluation and meeting of requirements. |
| Features that strengthen the user's autonomy and in which the user retains decision-making power. | Features suitable for manipulation, surveillance, revenge, defamation or escalation of conflict. |
| Language that recognises the seriousness of violence and boundary violations without constructing an enemy image. | Assumptions based on gender, background or role about who is the victim, perpetrator, guilty party or dangerous one. |
6. Limits on product development
The following limitations guide product development. Their purpose is to reduce the risk that an application accidentally reinforces harmful interpretations, gives the user unfounded certainty, or comes to be used against another person.
No applications against anyone. Products are not designed as instruments for attack, revenge, shaming, control, defamation or the construction of evidence.
No certain interpretations of another person. An application must not infer or assert that another person is, for example, violent, narcissistic, manipulative, dangerous or evil on the basis of a single user's account.
No positioning as a therapy service. The product does not diagnose, treat, make clinical assessments or replace mental health, healthcare or crisis services. Should such features be built in the future, they will be evaluated separately from the perspective of qualifications, responsibilities, regulation and safety.
No legal advice or public-authority role. The product does not provide legal conclusions, does not assess the elements of a criminal offence, and does not replace public-authority, legal or legal-aid services.
No gendered enemy image. Experiences of violence and lack of safety are recognised as serious, but they are not framed as a property of one gender or group of people.
No building of user dependency. The product should not make the user dependent on the application's assessment. It should support the user's own judgement, connection to trusted people and, where needed, to professional help.
7. Principles of user safety
Anticipating harms. For each significant feature, we assess how it could be misused, how it could increase conflict and how harm can be reduced.
Making uncertainty visible. The application does not present interpretations based on a subjective account as certain truths. The user is told what the product can and cannot infer.
User autonomy. The user makes decisions themselves. The product may help them consider options, risks and boundaries, but it does not order or pressure.
Safe referrals. When the situation described by the user suggests immediate danger, serious violence, suicidality or another acute risk, the product should direct them to seek immediate help from appropriate sources.
Data minimisation. We collect only the information that is justified for the purpose of the product. In the handling of sensitive information, transparency, user control, information security and limited retention are emphasised.
Humane language. Content should be calming, respectful and clarifying. It must not stoke fear, anger, shame or arrogant certainty.
8. Principles for the use of content and AI
If artificial intelligence is used in our products, it is used to support the user's own thinking, not as an authority for evaluating people. AI responses must acknowledge their limitations: the model does not know the whole situation, does not see all parties and cannot resolve the truth on the user's behalf.
- AI must not make diagnoses about the user or other people.
- AI must not name another party as guilty, dangerous or evil on the basis of the user's account alone.
- AI may help to distinguish observations, emotions, needs, boundaries, wishes and alternative courses of action.
- AI should encourage the user towards safe, non-violent and respectful action.
- In acute risks, AI should direct the user to appropriate help rather than attempting to resolve the crisis in conversation.
- AI must not optimise for engagement at the expense of the user's well-being.
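To make these constraints concrete, the principles above could be encoded as an automated pre-send check on every AI reply. The sketch below is purely illustrative: the function name, keyword lists and routing labels are hypothetical, and a real system would require far more nuanced evaluation than keyword matching.

```python
from dataclasses import dataclass

# Hypothetical labels the policy forbids the assistant to attach to another
# person on the basis of one account alone (illustrative, not exhaustive).
FORBIDDEN_PERSON_LABELS = {"narcissistic", "dangerous", "evil", "manipulative"}

# Hypothetical signals that should route the user to outside help
# instead of a conversational reply.
ACUTE_RISK_SIGNALS = {"immediate danger", "suicidal", "serious violence"}

@dataclass
class GuardrailResult:
    allowed: bool
    reason: str

def screen_draft_reply(user_message: str, draft_reply: str) -> GuardrailResult:
    """Apply the policy's AI limits to a draft reply before it is shown."""
    msg = user_message.lower()
    reply = draft_reply.lower()

    # Acute risk: direct to appropriate help rather than resolving in chat.
    if any(signal in msg for signal in ACUTE_RISK_SIGNALS):
        return GuardrailResult(False, "route_to_crisis_help")

    # No verdicts about another person based on a single user's account.
    if any(label in reply for label in FORBIDDEN_PERSON_LABELS):
        return GuardrailResult(False, "reply_labels_another_person")

    return GuardrailResult(True, "ok")
```

The key design choice the policy implies is that the check runs on the reply as well as the user's message: even a well-intentioned model output is blocked if it turns the user's account into a verdict about another person.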
9. Definition of success
A successful Safety in Relationships Oy Ltd product increases the user's clarity, sense of safety, self-understanding and capacity to act in line with their own values and boundaries. It does not increase anger, fear, dependency, one-sided certainty or a wish to harm another party.
Product development metrics should reflect this aim. Time spent in the app, repeat returns, intense emotional response or high engagement are not, on their own, signs of success. Important metrics include, for example, perceived clarity, safer choices of action, better recognition of boundaries, reduced escalation of conflict and the user's sense of strengthened agency.
10. Principles of decision-making and governance
Ethical assessment before release. New features are assessed against the values of this document, the risks of misuse and user safety.
Documented product rationale. For significant product decisions, we record what problem is being solved, for whom, on what assumptions and what harms we seek to prevent.
Feedback and correction mechanism. User feedback, observations of harm and incidents are handled systematically. Where necessary, features are restricted, modified or removed.
Expert evaluation when a threshold is crossed. If a feature approaches therapy, healthcare, legal advice, a public-authority process or high-risk decision-making, it is evaluated separately from the perspective of appropriate competence and requirements.
Transparency to the user. The user is told clearly what the product is, what it is not, what it can be used for, and when the user should seek other help.
11. Application guidance for product development
For each new feature, the following checklist may be used. A feature should not be released if these questions cannot be answered credibly.
- Does the feature increase the user's safety, self-understanding and responsible agency?
- Could the feature be used to control, shame, surveil or harm another person?
- Does the feature build an enemy image, a gendered generalisation or unfounded certainty?
- Is it clear to the user that the application is not a therapist, lawyer, public authority or crisis service?
- Is the language of the feature calming, precise and non-violent?
- Does the feature collect only necessary data, and does the user have understandable control over their own information?
- In acute danger, does the feature direct the user away from the application and towards appropriate help?
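One way to make the checklist operational is to express it as a release gate that blocks a feature until every question has a credible answer in the right direction. The sketch below is a hypothetical illustration: the question identifiers and the gate function are assumptions, not an actual internal tool.

```python
# The checklist above, expressed as (id, question) pairs. The shortened
# wording and the ids are illustrative only.
RELEASE_CHECKLIST = [
    ("agency", "Does the feature increase safety, self-understanding and responsible agency?"),
    ("misuse", "Could the feature be used to control, shame, surveil or harm another person?"),
    ("framing", "Does the feature build an enemy image or unfounded certainty?"),
    ("scope", "Is it clear the app is not a therapist, lawyer, authority or crisis service?"),
    ("language", "Is the language of the feature calming, precise and non-violent?"),
    ("data", "Does the feature collect only necessary data, with user control?"),
    ("referral", "In acute danger, does the feature direct the user towards appropriate help?"),
]

# Most questions must credibly be answered "yes"; the two that describe
# harms ("misuse", "framing") must credibly be answered "no".
EXPECTED = {"misuse": False, "framing": False}

def may_release(answers: dict[str, bool]) -> bool:
    """Return True only if every question is answered in the expected
    direction. A missing answer blocks release, matching the policy's
    rule that unanswerable questions stop a feature from shipping."""
    for key, _question in RELEASE_CHECKLIST:
        if key not in answers:
            return False
        if answers[key] != EXPECTED.get(key, True):
            return False
    return True
```

Treating a missing answer as a failure, rather than a pass, mirrors the policy's stance that a feature is withheld when the essential questions cannot yet be answered credibly.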
12. Summary
The mission of Safety in Relationships Oy Ltd is to build technology that supports safer relationships without confrontation and without unfounded promises. We primarily develop self-reflection and individual support tools. We do not build applications against anyone.
Our aim is to increase the well-being and safety of all individuals and parties. We pursue this aim through compassion, boundaries, responsibility and respectful engagement. This principle guides both the content of our products and what we decide not to do.
Final principle. We build tools that help a person to meet themselves and others more responsibly. We do not build tools that turn another person into an object.
Safety in Relationships Oy Ltd | Values and product development principles | Version 1.0 · Last updated: 4 May 2026