Zero Trust Fabric
The distributed security substrate of Lattix — carrying policy, trust, revocation, telemetry, and validation state across clustered management services and decentralized mesh nodes under tenant-defined quorum.
The Zero Trust Fabric is the distributed security substrate of the Lattix platform. It connects clustered management services and decentralized mesh nodes so policy, trust, revocation, telemetry, and validation state can propagate across the system without depending on a central controller.
Security state may be authored in the management plane, but it is evaluated locally throughout the mesh by a common runtime embedded on every node. Edge participants are not passive consumers of centrally authored policy: they can contribute security-relevant observations and proposed changes, which the fabric admits or rejects through tenant-defined quorum rules designed to support resilient and disconnected operations.
Each organization operates its own fabric. When fabrics establish explicit trust, protected data can move across organizational boundaries while retaining the controls assigned by the originating organization.
What the fabric does
Read the fabric as five behaviors, not five services: policies and security state are authored in the management plane, propagated across both planes, evaluated locally wherever a request arrives, proposed from the edge as well as from management, and admitted or rejected under quorum. Because edge nodes can originate security-relevant proposals of their own, the fabric has to decide what counts as authoritative. That decision is not hard-coded; it is a tenant-configurable quorum, covered later on this page.
How security state moves
The fabric does not assume that all authoritative security knowledge resides in a centralized management tier. Mesh participants at the edge can originate security-relevant observations, including proposed revocations, trust updates, or policy changes. Admission of those changes is controlled by tenant-defined quorum rules. A tenant may require management-only approval, require at least one management participant in the quorum, or treat management and edge participants equally as part of a full-mesh quorum. This allows the fabric to remain resilient in degraded or disconnected environments while preserving explicit control over trust and authority.
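The three quorum modes described above can be sketched as a signature-counting check. This is an illustrative sketch, not the Lattix API: the `Role` and `QuorumRule` names, the `threshold` parameter, and the `admits` function are all assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    MANAGEMENT = "management"
    EDGE = "edge"

class QuorumRule(Enum):
    MANAGEMENT_ONLY = "management_only"      # only management signatures count
    MANAGEMENT_PLUS_MESH = "management_plus" # any mix, but >= 1 management signer
    FULL_MESH = "full_mesh"                  # management and edge count equally

@dataclass
class Signature:
    signer_id: str
    role: Role

def admits(rule: QuorumRule, sigs: list[Signature], threshold: int) -> bool:
    """Return True if a signed proposal meets the tenant's quorum rule."""
    mgmt = [s for s in sigs if s.role is Role.MANAGEMENT]
    if rule is QuorumRule.MANAGEMENT_ONLY:
        return len(mgmt) >= threshold
    if rule is QuorumRule.MANAGEMENT_PLUS_MESH:
        return len(mgmt) >= 1 and len(sigs) >= threshold
    return len(sigs) >= threshold  # FULL_MESH: every enrolled participant is equal
```

With one management and one edge signature and a threshold of two, a full-mesh rule admits the proposal while a management-only rule rejects it.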
How quorum and convergence work
Quorum is the fabric's admission layer. Every signed change — a new policy version, a revocation, a trust update, an attestation refresh — is a proposal until the tenant's quorum rule says it isn't. Three properties matter:
- Signed proposals, not open writes. Every proposed change carries the identity of its originator and is rejected if the signature cannot be validated against the fabric's enrolled participants.
- Convergence, not consensus at every hop. The fabric converges on the latest admitted state using signed, versioned replication. Two nodes that have both seen the admission event will converge on the same state; two nodes that have only seen part of it will still refuse to evaluate past their replica's TTL.
- Quorum is per-change-class. A tenant may configure revocations to require only one management signature for rapid response, while requiring full-mesh admission for tag-schema changes. Quorum is tenant-configurable per change class, not a single global setting.
This is what makes the fabric viable in DDIL conditions. A disconnected or degraded environment can continue to evaluate existing policy, admit a local revocation under a full-mesh rule, and later reconverge with the management plane when connectivity returns — without having to surrender authority to a controller that isn't reachable.
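The convergence and TTL behavior above can be sketched as a minimal policy replica: merging takes the newest admitted version, and evaluation refuses once the replica ages past its TTL. The `PolicyReplica` type, its method names, and the timestamp-based check are illustrative assumptions; the real fabric uses signed, versioned replication.

```python
from dataclasses import dataclass

@dataclass
class PolicyReplica:
    version: int        # latest admitted policy version this node has seen
    admitted_at: float  # timestamp of the last converged admission event
    ttl_seconds: float  # how long the replica may evaluate without reconverging

    def merge(self, other: "PolicyReplica") -> None:
        """Converge on the newest admitted state seen by either node."""
        if other.version > self.version:
            self.version = other.version
            self.admitted_at = other.admitted_at

    def can_evaluate(self, now: float) -> bool:
        """Fail closed: a replica past its TTL refuses to evaluate."""
        return (now - self.admitted_at) <= self.ttl_seconds
```

Two replicas that exchange state converge on the same version; a partitioned replica keeps evaluating on its last converged state until the TTL expires, then refuses.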
How nodes enforce locally
Every mesh node carries the same enforcement stack. A request arrives at the workload, the local policy enforcement point (PEP) intercepts it, the embedded policy decision point (PDP) evaluates it against the local policy replica, and, if the decision allows, the key access service (KAS) mediates authorized use of the key encryption key (KEK) and releases the data encryption key (DEK). The consumer decrypts locally, and a signed evidence event goes to the ledger whether the request was allowed or denied.
This is where fail-closed really bites. A policy replica past its TTL refuses to evaluate. The KAS refuses any request whose decision signature is invalid. An unreachable KAS leaves the object sealed. A denied request still writes a denied-access record. The node never reaches out to a central evaluator to "refresh" a decision under pressure: it either has the converged state it needs, or it fails closed.
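The node-local pipeline and its fail-closed branches can be sketched as a single decision ladder. Everything here is a simplification: the real PEP, PDP, and KAS are separate components exchanging signed inputs, while this sketch flattens their checks into boolean parameters so the ordering and the always-write-evidence property stay visible.

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceLog:
    events: list = field(default_factory=list)

    def record(self, request_id: str, outcome: str, reason: str = "") -> None:
        # In the real fabric this is a signed evidence event on the ledger.
        self.events.append({"request": request_id, "outcome": outcome, "reason": reason})

def handle_request(request_id: str, replica_fresh: bool, decision_allows: bool,
                   signature_valid: bool, kas_reachable: bool,
                   log: EvidenceLog) -> "bytes | None":
    """Node-local enforcement ladder: every branch, allowed or denied, writes evidence."""
    if not replica_fresh:      # stale replica past TTL refuses to evaluate
        log.record(request_id, "denied", "stale-replica")
        return None
    if not decision_allows:    # embedded PDP denied the request
        log.record(request_id, "denied", "policy-denied")
        return None
    if not signature_valid:    # KAS refuses an invalid decision signature
        log.record(request_id, "denied", "invalid-decision-signature")
        return None
    if not kas_reachable:      # unreachable KAS leaves the object sealed
        log.record(request_id, "denied", "kas-unreachable")
        return None
    log.record(request_id, "allowed")
    return b"\x00" * 32        # placeholder for the released DEK
```

Note that there is no "retry against a central evaluator" branch: a request either reaches the DEK release through every local check, or it terminates in a denied-access record.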
How fabrics trust each other
Cross-fabric trust does not imply open sharing of internal state. Each organization operates its own Zero Trust Fabric and retains its own authoritative security domain. When two fabrics establish explicit trust, one fabric can validate and enforce the TDF manifest and source-origin controls applied by the other. This allows protected data to move across partner environments while retaining the originating organization's security requirements and reducing the risk of policy loss across organizational boundaries.
The consuming organization may still layer additional local policy on top — for example, tightening access for a specific consumer group — but it cannot weaken the originating organization's controls the object was produced under.
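The tighten-only layering can be sketched with attribute sets: the effective controls are the union of the originator's requirements and the local ones, so local policy can add constraints but never subtract one. The attribute-string format and function names are illustrative assumptions, not the Lattix policy model.

```python
def effective_controls(origin_required: set[str], local_required: set[str]) -> set[str]:
    """Local policy may add requirements but never remove the originator's."""
    return origin_required | local_required

def access_allowed(subject_attrs: set[str], origin_required: set[str],
                   local_required: set[str]) -> bool:
    """Allow only subjects satisfying every originator and local requirement."""
    return effective_controls(origin_required, local_required) <= subject_attrs
```

A subject that satisfies the local requirement but lacks one of the originator's controls is still denied: there is no way to author a local rule that drops an origin requirement.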
Fail-closed behavior
The fabric fails closed across every layer. Node-local failures (stale replica, invalid decision signature, unreachable KAS) are covered under "How nodes enforce locally" above. The cross-cutting properties are:
- Partition tolerance, not controller dependence. When the data plane is partitioned from the management plane, existing policy continues to evaluate locally on the last converged state. New versions simply don't reach the node until connectivity returns.
- Quorum-bound admission. A proposed change that does not meet the tenant's quorum rule is not applied. Under a full-mesh rule, DDIL environments can still admit urgent revocations locally while isolated from the management plane.
- Deny still produces evidence. Failed requests, rejected proposals, and expired decisions all write signed events to the ledger. The absence of a record continues to be meaningful because every relevant operation — allowed or denied — produces one.
The effect is that partial connectivity failures degrade capability rather than remove it, and every participant — management or edge — defaults to denial when it cannot trust the state it has.
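One common way to make the absence of a record meaningful is a hash-chained append-only log, where removing or rewriting any event invalidates every later digest. This is a generic sketch, not the Lattix ledger implementation; real evidence events would also carry participant signatures.

```python
import hashlib
import json

class ImmutableLedger:
    """Hash-chained append-only log: deleting or altering any event breaks the chain."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, event: dict) -> None:
        prev = self.entries[-1]["digest"] if self.entries else "genesis"
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"event": event, "prev": prev, "digest": digest})

    def verify(self) -> bool:
        """Recompute the chain; any gap or edit shows up as a digest mismatch."""
        prev = "genesis"
        for e in self.entries:
            payload = json.dumps(e["event"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if e["prev"] != prev or e["digest"] != expected:
                return False
            prev = e["digest"]
        return True
```

Because denied requests write events too, a verifier that sees an unbroken chain with no denial record can conclude the denial never happened, rather than wondering whether it was silently dropped.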
Relationship to other concepts
- The Trusted Data Format is the envelope the fabric carries, and it is the artifact cross-fabric trust validates.
- Policies and ABAC are authored in the management-plane PAP and evaluated by the common PDP runtime embedded in every mesh node.
- The Hierarchical Key Model is how the distributed KAS instances gate cryptographic unwrap operations on each authorized decision.
- Every fabric event — decision, proposal admission, revocation, and key release — is recorded on the Immutable Ledger.