Our work is limited to assessing the condition of data as it exists at the time of examination.
Within that scope, we assess whether the data is acceptable for its stated use in:
- AI models
- automated or algorithmic decision systems
This includes examining factors that affect whether the data can safely support those uses.
We do not:
- design or improve data systems
- clean, restructure, or enrich data
- recommend tools, vendors, or architectures
- advise on how issues should be resolved
We also do not provide opinions on business strategy, ethics, or intent.
Our role ends at verification.
Our determinations do not certify:
- outcomes produced by AI systems
- fairness, bias, or representativeness
- legal or regulatory compliance
- performance of specific models or vendors
Those considerations may rely on data quality, but they are outside our scope.
We verify the condition of data based on what is provided and declared at the time of examination.
Responsibility for:
- how the data is used
- how findings are acted upon
- whether systems are deployed
remains with the organisation using the data.
This separation is essential to maintain independence.
All determinations reflect a specific moment.
If data changes after verification, the determination may no longer apply.
For this reason, determinations should be treated as a record of condition, not a permanent state.
Clear limits protect everyone involved.
They ensure:
- findings are not influenced by implementation concerns
- verification remains independent of delivery
- results can be relied upon by boards, regulators, and external reviewers
By staying within a narrow scope, we make our conclusions stronger, not weaker.
We verify condition.
We document findings.
We do not intervene.