Making AI oversight you can rely on

SiRelations is an AI safety and research organization working to build reliable methods for human oversight of AI systems. We believe that as AI grows more capable, maintaining meaningful human agency becomes both more important and more difficult.


Research-first approach

We treat human oversight as a systematic science, not an assumption. Our work develops empirical methods to measure whether humans can actually exercise oversight — not just whether they're nominally "in the loop."

Practical application

We translate foundational research into tools that organizations can use. Regulatory frameworks mandate oversight but provide no methods to assess it. We're building those methods.

Policy engagement

We work with policymakers to communicate what our research reveals about the gap between oversight requirements and oversight capabilities.

Long-term orientation

We're preparing for a world where humans regularly collaborate with systems that exceed human capabilities. The measurement methods we develop today will become essential infrastructure.

Our work

We focus on three interconnected areas that together address the oversight challenge.

Research

Foundational investigation into how humans can maintain agency when working with advanced AI. We study discernment — the capacity to remain clear, responsible, and sovereign in AI collaboration. Our research identifies the cognitive and contextual factors that enable or undermine human judgment.

Measurement

Developing empirical methods to assess oversight capacity. Current regulatory frameworks assume humans can "correctly interpret" AI output and decide when to override — but provide no way to verify this. We build the assessment tools that make oversight measurable.

Policy

Translating research findings into actionable guidance for organizations and policymakers. The EU AI Act's Article 14 requires human oversight. Impact assessment frameworks identify risks. Our work addresses what these frameworks leave open: whether designated overseers can actually do the job.

Our values

1. Rigorous honesty

We report what we find, even when the findings are uncomfortable. If oversight isn't working, we say so.

2. Measurement over assertion

Claims about human oversight should be empirically testable. We build the methods that make them so.

3. Bounded scope

We deliver specific, measurable products — not open-ended consulting. Constraints create precision.

4. Long-term thinking

We're building for a future where human-AI collaboration is the norm, not the exception.

5. Public benefit

The challenge of human oversight affects everyone. Our research contributes to the broader AI safety ecosystem, not just our own products.

Our mission

Develop the science and methods for meaningful human oversight of AI systems — ensuring that as AI grows more capable, humans retain genuine agency in the collaboration.

Work with us

We partner with organizations preparing for AI governance requirements.

Get in touch