Core Concepts

Zero Trust Fabric

The distributed security substrate of Lattix — carrying policy, trust, revocation, telemetry, and validation state across clustered management services and decentralized mesh nodes under tenant-defined quorum.

The Zero Trust Fabric is the distributed security substrate of the Lattix platform. It connects clustered management services and decentralized mesh nodes so policy, trust, revocation, telemetry, and validation state can propagate across the system without depending on a central controller.

Security state may be authored in the management plane, but it is evaluated locally throughout the mesh by a common runtime embedded on every node. Edge participants are not passive consumers of centrally authored policy: they can contribute security-relevant observations and proposed changes, which the fabric admits or rejects through tenant-defined quorum rules designed to support resilient and disconnected operations.

Each organization operates its own fabric. When fabrics establish explicit trust, protected data can move across organizational boundaries while retaining the controls assigned by the originating organization.

What the fabric does

The Zero Trust Fabric is organized into three bands — Management Plane, Fabric Propagation Layer, Data Plane — with five behaviors running across them: Author, Propagate, Evaluate, Adapt, Converge. Callouts emphasize that edge and management participants can contribute security-relevant changes under tenant-defined quorum, and that cross-fabric trust validates source-fabric controls rather than replacing them.

Read the fabric as five behaviors, not five services. Policies and security state originate in the management plane, propagate across both planes, and are evaluated locally wherever a request arrives. Because edge nodes can originate security-relevant proposals of their own, the fabric also has to decide what counts as authoritative. That decision is not hard-coded — it is a tenant-configurable quorum, covered later on this page.

How security state moves

Security state propagation and convergence: management replicas publish signed state that propagates laterally between replicas and downward to mesh nodes, which in turn propagate laterally among peers and can emit telemetry, revocations, trust updates, policy proposals, and attestation back upward. A tenant-configured quorum layer determines whether a proposed change is admitted.

The fabric does not assume that all authoritative security knowledge resides in a centralized management tier. Mesh participants at the edge can originate security-relevant observations, including proposed revocations, trust updates, or policy changes. Admission of those changes is controlled by tenant-defined quorum rules. A tenant may require management-only approval, require at least one management participant in the quorum, or treat management and edge participants equally as part of a full-mesh quorum. This allows the fabric to remain resilient in degraded or disconnected environments while preserving explicit control over trust and authority.
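The three quorum modes described above can be expressed as a single admission predicate. The sketch below is illustrative only — the `Approval` and `QuorumMode` names and the threshold semantics are assumptions for the example, not Lattix APIs:

```python
from dataclasses import dataclass
from enum import Enum

class QuorumMode(Enum):
    MGMT_ONLY = "mgmt-only"   # only management participants admit; edge is advisory
    HYBRID = "hybrid"         # at least one management participant must sign
    FULL_MESH = "full-mesh"   # management and edge weighted equally (DDIL)

@dataclass(frozen=True)
class Approval:
    participant_id: str
    is_management: bool

def is_admitted(approvals: list[Approval], mode: QuorumMode, threshold: int) -> bool:
    """Return True if a proposed change satisfies the tenant's quorum rule."""
    if mode is QuorumMode.MGMT_ONLY:
        # Edge approvals carry no weight; count management signatures only.
        return sum(a.is_management for a in approvals) >= threshold
    if mode is QuorumMode.HYBRID:
        # Edge approvals count toward the threshold, but at least one
        # management participant must be part of the quorum.
        return (len(approvals) >= threshold
                and any(a.is_management for a in approvals))
    # FULL_MESH: every enrolled participant's approval counts equally.
    return len(approvals) >= threshold
```

Under a full-mesh rule, two edge approvals can admit a change while fully disconnected; under a hybrid rule, the same two approvals are rejected because no management participant signed.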

How quorum and convergence work

Quorum is the fabric's admission layer. Every signed change — a new policy version, a revocation, a trust update, an attestation refresh — is a proposal until the tenant's quorum rule says it isn't. Three properties matter:

  • Signed proposals, not open writes. Every proposed change carries the identity of its originator and is rejected if the signature cannot be validated against the fabric's enrolled participants.
  • Convergence, not consensus at every hop. The fabric converges on the latest admitted state using signed, versioned replication. Two nodes that have both seen the admission event will converge on the same state; two nodes that have only seen part of it will still refuse to evaluate past their replica's TTL.
  • Quorum is per-change-class. A tenant may configure revocations to require only one management signature for rapid response, while requiring full-mesh admission for tag-schema changes. Quorum composition is configured per change class, not as a single global setting.

This is what makes the fabric viable in DDIL conditions. A disconnected or degraded environment can continue to evaluate existing policy, admit a local revocation under a full-mesh rule, and later reconverge with the management plane when connectivity returns — without having to surrender authority to a controller that isn't reachable.

How nodes enforce locally

Local evaluation and authorized unwrap inside a single mesh node. The workload's request passes through a local PEP, which consults the embedded PDP against the local policy replica; on an allow decision, the KAS verifies the signed decision, authorizes KEK use, releases the DEK for local decrypt, and records a signed evidence event. Any stale replica, invalid signature, or unreachable KAS fails the request closed.

Every mesh node carries the same enforcement stack. A request arrives at the workload, the local PEP intercepts it, the embedded PDP evaluates it against the local policy replica, and — if the decision allows — the KAS mediates authorized KEK use and releases the DEK. The consumer decrypts locally, and a signed evidence event goes to the ledger whether the request was allowed or denied.

This is where fail-closed behavior takes effect. A node whose policy replica is past its TTL refuses to evaluate. An invalid decision signature causes the KAS to refuse the request. An unreachable KAS leaves the object sealed. A denied request still writes a denied-access record. The node never reaches out to a central evaluator to "refresh" a decision under pressure — it either has the converged state it needs, or it fails closed.
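The fail-closed ordering above can be condensed into one decision function. This is a deliberately flattened sketch — the boolean inputs stand in for the PEP/PDP/KAS interactions, and `handle_request` is an illustrative name, not a Lattix API. The one invariant it preserves exactly is that every branch, including denial, appends an evidence record:

```python
def handle_request(replica_fresh: bool,
                   decision_sig_valid: bool,
                   kas_reachable: bool,
                   pdp_allows: bool,
                   ledger: list[dict]) -> str:
    """Fail-closed request handling on a mesh node (illustrative).

    Every outcome — allow, deny, or sealed — writes an evidence event;
    in the real fabric that event is signed and sent to the ledger.
    """
    if not replica_fresh:
        outcome = "deny"      # stale policy beyond TTL -> refuse to evaluate
    elif not pdp_allows:
        outcome = "deny"      # PDP denies against the local replica
    elif not decision_sig_valid:
        outcome = "deny"      # KAS rejects an unverifiable decision
    elif not kas_reachable:
        outcome = "sealed"    # object stays sealed; no central fallback
    else:
        outcome = "allow"     # KAS authorizes KEK use and releases the DEK
    ledger.append({"outcome": outcome})
    return outcome
```

Note that there is no branch that retries against a remote evaluator: the only inputs are local state, which is what makes the behavior partition-tolerant.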

How fabrics trust each other

Cross-fabric trust: a protected object produced in Organization A's fabric moves to Organization B's fabric. Organization B does not replace A's controls — it validates the TDF manifest and enforces the source-origin requirements on access, optionally layering its own additional policy, but never weakening the source controls.

Cross-fabric trust does not imply open sharing of internal state. Each organization operates its own Zero Trust Fabric and retains its own authoritative security domain. When two fabrics establish explicit trust, one fabric can validate and enforce the TDF manifest and source-origin controls applied by the other. This allows protected data to move across partner environments while retaining the originating organization's security requirements and reducing the risk of policy loss across organizational boundaries.

Organization B may still layer additional local policy on top — for example, tightening access for a specific consumer group — but it cannot weaken the Org A controls the object was produced under.
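The "tighten but never weaken" rule reduces to requiring the union of source and local requirements. A minimal sketch, assuming attribute-set semantics for controls (the attribute strings and `access_granted` name are illustrative, not the TDF policy model):

```python
def access_granted(consumer_attrs: set[str],
                   source_required: set[str],
                   local_required: set[str] = frozenset()) -> bool:
    """Partner-fabric enforcement of source-origin controls (illustrative).

    The source fabric's requirements must always be satisfied; the local
    fabric may only add requirements on top (tightening), never remove any.
    """
    # Org A's controls travel with the object and are non-negotiable.
    if not source_required <= consumer_attrs:
        return False
    # Org B may layer extra requirements for its own consumers.
    return local_required <= consumer_attrs
```

Because the local set can only add conjuncts, no local configuration in Org B can grant access that Org A's controls would deny — which is exactly the structural guarantee the prose above describes.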

Fail-closed behavior

The fabric fails closed across every layer. Node-local failures (stale replica, invalid decision, unreachable KAS) are covered in the local-evaluation diagram above. The cross-cutting properties are:

  • Partition tolerance, not controller dependence. When the data plane is partitioned from the management plane, existing policy continues to evaluate locally on the last converged state. New versions simply don't reach the node until connectivity returns.
  • Quorum-bound admission. A proposed change that does not meet the tenant's quorum rule is not applied. Under a full-mesh rule, DDIL environments can still admit urgent revocations locally while isolated from the management plane.
  • Deny still produces evidence. Failed requests, rejected proposals, and expired decisions all write signed events to the ledger. The absence of a record continues to be meaningful because every relevant operation — allowed or denied — produces one.

The effect is that partial connectivity failures degrade capability rather than remove it, and every participant — management or edge — defaults to denial when it cannot trust the state it has.
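The "deny still produces evidence" property hinges on every event carrying a verifiable signature so its absence is meaningful. A stdlib-only sketch using HMAC as a stand-in — the real ledger would use asymmetric signatures bound to enrolled participant identities, and `record_event`/`verify_event` are illustrative names:

```python
import hashlib
import hmac
import json

def record_event(ledger: list[dict], key: bytes,
                 operation: str, outcome: str) -> None:
    """Append a signed evidence event; allows and denies alike produce one."""
    body = {"operation": operation, "outcome": outcome}
    canonical = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(key, canonical, hashlib.sha256).hexdigest()
    ledger.append({**body, "sig": sig})

def verify_event(event: dict, key: bytes) -> bool:
    """Recompute the signature over the event body and compare."""
    body = {k: v for k, v in event.items() if k != "sig"}
    canonical = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(event["sig"], expected)
```

Tampering with any field after the fact — for example flipping a recorded "deny" to "allow" — invalidates the signature, so the ledger stays trustworthy even when written under partition.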

Relationship to other concepts

  • The Trusted Data Format is the envelope the fabric carries, and it is the artifact cross-fabric trust validates.
  • Policies and ABAC are authored in the management-plane PAP and evaluated by the common PDP runtime embedded in every mesh node.
  • The Hierarchical Key Model is how the distributed KAS instances gate cryptographic unwrap operations on each authorized decision.
  • Every fabric event — decision, proposal admission, revocation, and key release — is recorded on the Immutable Ledger.