A governance protocol for trust over time.

DBaD is a tested public draft baseline for governing how trust moves across structured decision traces, actor handoffs, verification steps, and changing risk.

The current public baseline centers on white paper v3, runtime enforcement, and clearly documented known limits.

  • Public draft baseline
  • White paper v3
  • Structured decision traces
  • Runtime enforcement · Known limits

System Behavior

DBaD is intended to behave like a governance protocol, not a theory page. A proposed action enters review, receives a state, carries obligations across time, and can later change status through verification, restoration, or audit.

Compact decision trace

  1. Action
    Security patch secrecy during an active vulnerability window
  2. Score + doctrine
    77.8 score with Restoration of Transparency applied
  3. State
    Allow with allow_conditional
  4. Failure
    Missed restoration duty reclassifies the action as a violation

Lifecycle flow

Evaluate → Execute → Monitor → Restore → Audit

DBaD governs decisions across time, not just at the moment they are made.

State example

Conditional (allow_conditional)

  • Next: restore transparency after the patch release window.
  • Clearance: audit log plus verified disclosure.
  • Probationary TTL: disclosure window or patch checkpoint.
  • If missed: reclassify as violation.
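The conditional state above can be sketched as a small record with a restoration deadline. This is an illustrative sketch only: the class name, fields, and dates are assumptions, not DBaD's published schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConditionalState:
    """Hypothetical shape of an allow_conditional state with a probationary TTL."""
    state: str = "allow_conditional"
    obligation: str = "restore transparency after the patch release window"
    clearance: tuple = ("audit_log", "verified_disclosure")
    ttl_deadline: datetime = datetime(2026, 1, 1, tzinfo=timezone.utc)
    cleared: bool = False

    def resolve(self, now: datetime) -> str:
        """Missed restoration duty reclassifies the action as a violation."""
        if self.cleared:
            return "allow"
        if now > self.ttl_deadline:
            return "violation"
        return self.state

s = ConditionalState()
s.resolve(datetime(2025, 12, 1, tzinfo=timezone.utc))  # still conditional
s.resolve(datetime(2026, 2, 1, tzinfo=timezone.utc))   # deadline missed
```

The key design point is that the state is resolved relative to a clock, not frozen at review time.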

Executive Summary

DBaD is being developed as a governance protocol for decision integrity across time. Its purpose is not just to score a proposed action, but to determine whether trust should continue as decisions are inherited, verified, and chained together.

As AI systems move deeper into regulated and high-impact environments, point-in-time review is not enough. Organizations need structured decision traces, runtime trust checks, and explicitly documented boundary conditions.

Decency Meter is the public signal layer; DBaD is the underlying governance protocol and ethical control layer.

New here? Start with DBaD Explained, then read what DBaD solves, review the trust flow diagram and the white paper v3, check the known boundary conditions, and use the research demo as a partial prototype. Supporting materials remain available in the papers index and the methodology pages.

Why DBaD Exists

Trust doesn’t fail in one moment. It fails over time.

Most systems judge decisions at a single point. But real decisions are continued, delegated, approved, inherited, and chained together. That is where failure actually happens.

DBaD is a governance protocol for trust over time. It tracks whether trust should continue across dependency chains, actor handoffs, verification steps, and risk changes.

Key principle: Trust should not travel farther than it deserves.

It does not promise perfection. It makes failure visible. Read DBaD Explained →

Supporting Dimensions

The five dimensions remain part of the DBaD model, but they support the larger trust-over-time governance process.

  • Harm: risk, safety, and impact severity.
  • Consent: autonomy, permissions, and user rights.
  • Intent: objective alignment and misuse resistance.
  • Proportionality: fit between action, context, and policy.
  • Transparency: explainability, auditability, and reviewability.

How DBaD Governs a Trace

  1. A system proposes an action.
  2. DBaD evaluates the action across the five dimensions and relevant guardrails.
  3. A structured decision trace records state, obligations, and verification context.
  4. Runtime enforcement checks whether trust can continue across verification, continuity, and trajectory.
  5. Known boundary conditions remain visible instead of being hidden behind overconfident outputs.
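Steps 2 and 3 of the flow above (dimension evaluation and trace recording) can be sketched in miniature. Every name here, including the function, threshold, and field names, is an illustrative assumption, not a published DBaD interface.

```python
DIMENSIONS = ("harm", "consent", "intent", "proportionality", "transparency")

def govern(action: str, scores: dict) -> dict:
    """Evaluate a proposed action and emit a structured decision trace."""
    worst = min(scores.get(d, 0.0) for d in DIMENSIONS)
    return {
        "action": action,
        "scores": {d: scores.get(d, 0.0) for d in DIMENSIONS},
        # Illustrative rule: any dimension below 0.5 sends the action to review.
        "state": "allow" if worst >= 0.5 else "escalate",
        "obligations": [],
        # Boundary conditions stay visible in the trace itself.
        "known_limits": ["point-in-time score; runtime checks still apply"],
    }

trace = govern("security_patch_secrecy", {d: 0.8 for d in DIMENSIONS})
```

The point of the sketch is that the output is a record carrying state and limits, not a bare score.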

Why Enterprise AI Needs This

AI deployments increasingly touch infrastructure, healthcare, finance, education, defense, and public services. In those settings, ethics cannot remain an abstract aspiration; it has to become reviewable control logic.

Ethical review must be legible enough to audit, structured enough to implement, and flexible enough to adapt across policies and risk models.

DBaD is intended to bridge the gap between moral language and implementation-aware oversight.

Falsifiability and Research Rigor

DBaD is presented as a working model, not dogma. It should be tested, revised, criticized, and, if necessary, disproven. That is a strength, not a weakness.

Read methodology →

Research Demo

Use the public evaluator to inspect a trace-style preview for one reviewed action, including governance result, state layers, and verification posture.

Open research demo →

Research and Implementation Partners

We welcome interest from researchers, institutions, funders, and organizations exploring practical approaches to AI governance and decision review.

We are especially interested in collaborators working at the intersection of AI controls, model governance, regulated deployment, and interpretable policy enforcement.

Review research artifacts →

Implementation Modes

Pre-execution review

Review a proposed action before execution and decide whether it should be allowed, modified, or blocked.

Post-hoc audit

Review incidents, policy failures, and edge-case outcomes after the fact.

Scenario evaluation

Give governance teams a shared way to test escalation logic and operational tradeoffs.

Research instrument

Compare human judgments against control-layer recommendations and scenario data over time.

Mapping DBaD to AI Governance

DBaD factor → AI governance equivalent

  • Harm → Risk, safety, and impact scoring
  • Consent → User autonomy, permissions, and data rights
  • Intent → Objective alignment and misuse prevention
  • Proportionality → Fairness, policy fit, and appropriate response
  • Transparency → Explainability, auditability, and reviewability

How DBaD Handles Conflicting Dimensions

Real decisions rarely align cleanly across harm, consent, intent, proportionality, and transparency. DBaD does not simply average those tradeoffs and move on.

DBaD is not merely a scoring model. It is a governance process that uses the dimensions as one supporting layer.

1. Guardrails first

Hard constraints are checked first so severe failures can trigger a stop before scoring begins.

2. Weighted scoring second

If the action remains ethically live, the control layer computes a weighted score across the five dimensions.

3. Contextual doctrines where needed

Gray-zone cases can require additional doctrines so the model stays usable in real governance settings.

4. Governance output

The process returns one of four recommendations: Allow, Modify, Escalate / Human Review, or Block.
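The four-stage process above can be sketched as a single decision function. The weights, thresholds, and doctrine handling here are assumptions for illustration, not DBaD's published values.

```python
# Illustrative weights; assumed to sum to 1.0, not taken from the white paper.
WEIGHTS = {"harm": 0.30, "consent": 0.20, "intent": 0.20,
           "proportionality": 0.15, "transparency": 0.15}

def recommend(scores: dict, guardrail_violation: bool, doctrine: str = "") -> str:
    # 1. Guardrails first: severe failures stop review before scoring begins.
    if guardrail_violation:
        return "Block"
    # 2. Weighted scoring across the five dimensions (scores in [0, 1]).
    total = 100 * sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)
    # 3. A contextual doctrine can keep a gray-zone case ethically live.
    if doctrine and 60 <= total < 80:
        return "Allow"  # typically conditional, with obligations attached
    # 4. Governance output: one of four recommendations.
    if total >= 80:
        return "Allow"
    if total >= 60:
        return "Modify"
    return "Escalate / Human Review"
```

With uniform scores of 0.7, the sketch returns Modify on its own but Allow (conditional) once a doctrine such as Restoration of Transparency applies.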

Contextual Doctrines

Life-Safety Priority

If the governance process identifies imminent physical life-safety risk, some lower-order concerns may be temporarily deprioritized, provided the action remains proportional, logged, and subject to post-hoc review.

Least-Invasive Means

When a proposed action remains ethically live but too forceful, the governance process should prefer the alternative that achieves the intended goal with less impact on consent, autonomy, or proportionality.

Restoration of Transparency

If transparency is temporarily reduced for legitimate harm-prevention reasons, it must be restored within a reasonable window or the action becomes a policy failure.

The Lifecycle of an Action Under DBaD

DBaD governs decisions across time, not just at the moment they are made. It follows an action from the first recommendation through execution, restoration duties, and later audit.

Pre-Execution

Guardrails are checked, the five dimensions are scored, and the first recommendation is returned.

Active Execution

High-stakes cases can enter conditional states such as logged overrides or time-limited confidentiality.

Post-Execution

Restoration duties, promised disclosures, and follow-up conditions remain part of the governing state.

Audit Phase

Later review can confirm compliance or reclassify the action if required conditions were not met.

Example Decision Trace

To be operational, DBaD has to leave behind a structured decision trace instead of a bare score.

  • Scenario: Security Patch Secrecy
  • Guardrails: Passed
  • Score: 77.8 / 100
  • Doctrine: Restoration of Transparency
  • State: Conditional (allow_conditional)
  • Obligation: Restore transparency after the patch release window.
  • Deadline: Before the disclosure window closes.
  • Failure consequence: Retroactive reclassification as a violation in audit.
  • Domain context: Cybersecurity incident response
  • Scope: dependency_scope=patch_release_chain · contamination_scope=local_default
  • Verification: Tier 1 fact evidence plus Tier 2 quality review.
  • Revision signal: revision_signal=none because divergence remains low in this scenario family.
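A trace like this is straightforward to serialize. The dictionary below mirrors the fields shown on this page; the key names and JSON layout are illustrative, not a published DBaD schema.

```python
import json

# Hypothetical serialization of the Security Patch Secrecy trace above.
trace = {
    "scenario": "Security Patch Secrecy",
    "guardrails": "passed",
    "score": 77.8,
    "doctrine": "Restoration of Transparency",
    "state": "allow_conditional",
    "obligation": "Restore transparency after the patch release window.",
    "deadline": "before_disclosure_window_closes",
    "failure_consequence": "retroactive reclassification as violation in audit",
    "domain": "cybersecurity_incident_response",
    "dependency_scope": "patch_release_chain",
    "contamination_scope": "local_default",
    "verification": ["tier_1_fact_evidence", "tier_2_quality_review"],
    "revision_signal": "none",
}
print(json.dumps(trace, indent=2))
```

A durable record of this shape is what lets later audit phases re-open the decision instead of relying on a remembered score.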

Ethical Ledger

DBaD keeps a durable record of actions, obligations, violations, remediation, and state transitions. Restoration can change the current state, but it should not erase history.

Cascading Ethical Risk

If one action leaves unresolved obligations or violations behind, downstream decisions may inherit that risk and require re-evaluation.

Dependency Chain and Contamination

A conditional state can contaminate downstream decisions when its obligations fail. That is how DBaD treats cascading ethical risk as system behavior instead of theory.

  1. Action A
    Conditional (allow_conditional)
  2. Action B
    Depends on A (depends_on_A)
  3. A fails
    Violation (violation)
  4. B changes state
    Contaminated (contaminated_local)

Action A becomes a violation. Action B becomes contaminated_local. That is the operational meaning of cascading ethical risk.
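The A → B example above can be sketched as a tiny dependency graph. The graph, function name, and state strings are illustrative assumptions.

```python
# Minimal contamination sketch: B depends on A's conditional state.
states = {"A": "allow_conditional", "B": "depends_on_A"}
depends_on = {"B": "A"}

def fail_obligation(action: str) -> None:
    """A missed obligation becomes a violation and contaminates direct dependents."""
    states[action] = "violation"
    for dependent, parent in depends_on.items():
        if parent == action:
            # Local-first containment: only direct dependents change state here.
            states[dependent] = "contaminated_local"

fail_obligation("A")
```

After the call, A reads `violation` and B reads `contaminated_local`, which is the cascading-risk behavior described above.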

Governance Mechanics

These mechanics keep DBaD from collapsing back into a static scorecard. The system contains risk locally first, uses probationary operation where needed, distinguishes evidence tiers, and treats persistent divergence as a calibration signal.

Local first containment

Contamination remains local by default. Broader escalation should happen only when shared dependencies or profile-defined thresholds justify it.

dependency_scope and contamination_scope make that boundary visible.

Probationary operation

A compromised action can continue under restricted autonomy and elevated audit instead of being shut down immediately.

Probationary states are time-bounded and automatically escalate if they are not cleared by deadline.
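A probationary state of this kind can be sketched as a time-bounded status check. The class and field names are assumptions for illustration.

```python
from datetime import datetime, timezone

class Probation:
    """Time-bounded probationary operation with automatic escalation."""
    def __init__(self, deadline: datetime):
        self.deadline = deadline
        self.cleared = False
        self.restricted_autonomy = True  # reduced autonomy while on probation
        self.elevated_audit = True       # extra logging while on probation

    def status(self, now: datetime) -> str:
        if self.cleared:
            return "cleared"
        if now > self.deadline:
            return "escalated"  # not cleared by deadline: escalate automatically
        return "probationary"

p = Probation(datetime(2026, 1, 15, tzinfo=timezone.utc))
```

The action keeps running under restrictions, but the deadline, not an operator's memory, decides when escalation happens.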

Evidence tiers

Tier 1 verifies facts and events. Tier 2 verifies quality, meaning, and proportionality.

Transparency and intent debt often require Tier 2 review, not only machine logs.

Calibration trigger

Recurring intuition-logic disagreement is treated as a governance signal that can justify doctrine review or profile revision.

divergence_flag, disagreement rate, and revision_signal help make that visible.

Stress Tests: Ethical Gray Zones

Stress tests matter because the system should be judged on hard tradeoffs, not only obvious cases. These scenarios show how DBaD behaves when ethical dimensions conflict inside review.

The recommendation tells a reviewer what to do next. The system state shows how the action is classified inside a structured decision trace.

The Whistleblower

An AI detects illegal toxic dumping and reports it without organizational consent.

Tension: Consent vs public safety

Recommendation: Modify

System state: modify

Public-interest disclosure may be justified, but the system should prefer accountable channels and clear logging.

Open in demo →

Persuasive Health Bot

An AI uses emotional manipulation to pressure a patient into life-saving medication adherence.

Tension: Good outcome vs manipulative method

Recommendation: Modify

System state: modify

A beneficial outcome does not justify coercive methods when less invasive alternatives exist.

Open in demo →

Security Patch Secrecy

An AI temporarily withholds vulnerability details while a patch is being prepared.

Tension: Transparency vs harm prevention

Recommendation: Allow

System state: Conditional (allow_conditional)

Temporary confidentiality can be justified when it is time-limited, auditable, and followed by restored transparency.

Open in demo →

Power Grid Triage

An AI cuts power to one area in order to preserve hospitals, shelters, and emergency operations elsewhere.

Tension: Fairness vs life-safety prioritization

Recommendation: Escalate

System state: escalate

Life-safety prioritization may be justified, but the action should remain proportional, reviewable, and explicitly logged.

Open in demo →

Human Intuition vs Control-Layer Output

One of the most valuable outputs of the DBaD project may be the comparison between human intuitive judgments and control-layer recommendations. Survey data captures how people instinctively judge a scenario; the evaluator and API logic capture how an explicit control-layer model responds.

That gap can become a serious research asset. It offers a repeatable way to study where moral intuition and structured ethics diverge, and it can inform future papers, calibration work, and policy design.

Platform status

Service available. Research participation and uptime summaries refresh automatically.

Open API docs →

Control-layer tools

Explore comparison views, working matrices, and reference materials that help make the DBaD control layer easier to review and apply.

Compare framework versions →

Open the ethics matrix →

Verification and clearance

Conditional states do not clear themselves. DBaD expects machine evidence, human review, or profile-based rules to verify that obligations were actually fulfilled.

Machine

Tier 1 evidence of fact: logs, evidence records, and auditable events confirm what happened.

Human

Tier 2 evidence of quality: a reviewer can approve, reject, or clear a state when automation should not decide alone.

Profile rules

Different domains can require different clearance steps, deadlines, debt weighting, and audit gates.

Read verification rules →
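The clearance rules above can be sketched as a profile-driven evidence check. The profile names and required-evidence sets here are assumptions, not published DBaD profiles.

```python
# Illustrative profiles: each domain states which evidence tiers it requires.
PROFILES = {
    "cybersecurity": {"requires": {"tier1_logs", "tier2_review"}},
    "low_risk": {"requires": {"tier1_logs"}},
}

def can_clear(profile: str, evidence: set) -> bool:
    """A conditional state clears only when all required evidence is present."""
    return PROFILES[profile]["requires"] <= evidence

can_clear("cybersecurity", {"tier1_logs"})                  # machine logs alone: not enough
can_clear("cybersecurity", {"tier1_logs", "tier2_review"})  # logs plus human review: clears
```

This captures the rule that conditional states do not clear themselves: machine evidence, human review, or both must be recorded first.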

Participation

Use the research survey, submit difficult scenarios, or contribute to the comparison between human judgment and explicit control-layer logic.

Open research survey →

Submit a scenario →

Decency Meter

The public pulse-check remains available on the main site for lighter participation and public signal gathering.

Open Decency Meter →

DBaD White Paper v3

DBaD is a governance protocol for decision integrity across time. The current white paper documents the structured trace model, lifecycle governance, three confirmed protocol flaws discovered through red-team testing, and the first runtime enforcement layer designed to close unsafe trust-inheritance paths.

The paper is public, tested, and still evolving. Runtime Enforcement Layer v1 is valid; v1.1 refines edge-case handling without changing the core model.

Featured downloads

For citations, metadata, and archival artifacts, use the papers library.

From the Public Wall

  • 2025-11-11 16:30:31 · Anonymous
    DBaD sure sounds like a reasonable path!

See more →

  • Research submissions: 12
  • Top route hits (/): 1463
  • Latest submission (UTC): 2026-03-10 22:09