Term 1
Behavioral Context Layer
Plain English
The data layer that shows how work is actually done across systems, screens, and decisions — capturing what system logs and screenshots miss.
Why It Matters
AI agents need context about exceptions, escalations, and judgment calls before they can safely automate. Event logs show what systems recorded. Screenshots show pixels. The behavioral context layer shows what people actually do — the decision points, the workarounds, the moments where rules bend under real operational pressure.
How TNDRL Implements It
Lightweight desktop and browser collectors (Fiber) observe real work execution continuously. Metadata only — no screenshots, no PII. TNDRL captures which applications were used, in what sequence, with how much time between steps, and which decision branches were taken. The signal includes timing anomalies, rework loops, and escalation triggers — the raw material for understanding whether a workflow is safe to automate.
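As an illustration only, a metadata-only behavioral event might be modeled like this; the field names, types, and the 60-second anomaly threshold are assumptions for the sketch, not TNDRL's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical shape of a single metadata-only behavioral event.
# Note what is absent: no screenshots, no keystroke content, no PII.
@dataclass
class BehavioralEvent:
    app: str                 # application in focus, e.g. "excel"
    action: str              # coarse action type: "click", "type", "switch"
    timestamp_ms: int        # when the event occurred
    gap_ms: int              # time since the previous event in the session
    decision_branch: Optional[str] = None  # which branch was taken, if any

# A session is an ordered list of events; timing anomalies show up as
# unusually large gaps between consecutive steps.
def timing_anomalies(events: list[BehavioralEvent],
                     threshold_ms: int = 60_000) -> list[int]:
    return [i for i, e in enumerate(events) if e.gap_ms > threshold_ms]
```

Sequence, timing, and branch fields are enough to surface the rework loops and escalation triggers described above without transmitting any content.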
See also: Workflow Twin
Term 2
Workflow Twin
Plain English
A live, continuously updated operational model of how a workflow actually runs — including every variant, exception path, escalation, and timing pattern.
Why It Matters
Static process maps are outdated before they're finished. A Workflow Twin reflects current reality and evolves as operations change. It becomes the single source of truth for how work actually flows — not how it's supposed to flow in documentation, but how it flows on Tuesday afternoon under deadline pressure.
How TNDRL Implements It
Weave (TNDRL's intelligence engine) assembles captured behavioral data into a living workflow graph with happy paths, variants, exceptions, decision points, and timing annotations. The graph updates continuously as new behavioral data arrives. Every edge represents a transition that real people took; every node represents a stable decision point. Visualization shows not just the primary path but the full distribution of real behavior: where people deviate, why they deviate, and how often. Weave also performs Automation Readiness scoring, a composite across two co-equal sub-components: Process Stability (Consistency, UI stability, Repetition) and Execution Risk (Complexity, Data structure, Exception rate, plus Compliance risk as the seventh canonical dimension, with its scoring engine shipping next). Interaction penalty functions pair a Process Stability dimension with an Execution Risk dimension to catch compounding risk that a linear combination would miss.
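The core graph-assembly idea can be sketched in a few lines: every observed session contributes weighted edges, and deviations appear as lower-weight transitions. This is an illustrative reduction, not Weave's actual representation:

```python
from collections import Counter

# Minimal sketch: fold observed sessions (ordered step names) into a
# weighted transition graph. Each edge count records how many real
# sessions took that transition.
def build_workflow_graph(sessions: list[list[str]]) -> Counter:
    edges = Counter()
    for steps in sessions:
        for a, b in zip(steps, steps[1:]):
            edges[(a, b)] += 1
    return edges

sessions = [
    ["open", "validate", "approve"],
    ["open", "validate", "approve"],
    ["open", "validate", "escalate"],  # a deviation from the happy path
]
graph = build_workflow_graph(sessions)
# graph[("validate", "approve")] == 2, graph[("validate", "escalate")] == 1
```

Edge weights are what make the "full distribution of real behavior" visible: the dominant path and the one-in-three escalation both survive in the same structure.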
See also: Automation Readiness Score
Term 3
Automation Readiness Score
Plain English
A composite 0–100 score that tells you whether a workflow is safe to delegate to AI agents — not binary, but dimensional.
Why It Matters
Without a readiness score, enterprises are guessing which workflows to automate. That's how automation projects fail — by automating exception-heavy, high-variance work that agents can't handle. A readiness score makes risk visible before you commit capital and brand trust to automation.
How TNDRL Implements It
Weave (TNDRL's intelligence engine) scores Automation Readiness as a composite across two co-equal sub-components: Process Stability (Consistency, UI stability, Repetition) and Execution Risk (Complexity, Data structure, Exception rate, plus Compliance risk as the seventh canonical dimension, with its scoring engine shipping next). Every filed interaction penalty function pairs one Process Stability dimension with one Execution Risk dimension, catching the "looks stable but execution is risky" patterns that a linear combination would miss. An evidence-sufficiency admissibility gate withholds the score entirely when the observational basis is insufficient: TNDRL refuses to produce a low-confidence score rather than mislead the buyer. A workflow scoring 85 or above is safe to delegate; 70–84 needs guardrails; 50–69 needs redesign first; below 50, do not automate. Trellis (TNDRL's governance engine) then enforces the readiness gate: only workflows that meet safety thresholds are promoted to governed operation. See the Automation Readiness deep-dive for the full dimensional breakdown.
See also: Process Stability
See also: Execution Risk
See also: Entropy
See also: Living Blueprint
Term 3a
Process Stability
Plain English
One of the two co-equal sub-components of the Automation Readiness Score. Answers a single question: can an agent run this workflow predictably?
Why It Matters
Stability is the precondition for safe automation. A workflow that executes inconsistently, runs on interfaces that change, or happens too rarely to build statistical confidence is a workflow where agents will hit decisions they weren't prepared for. Process Stability isolates that question so the buyer can see it directly — not hidden inside a single composite number.
How TNDRL Implements It
Process Stability rolls up three dimensions computed from observed behavioral execution: Consistency (how deterministic is execution across sessions?), UI stability (how stable are the interfaces the workflow touches? This dimension aggregates instability flags such as rework loops, failed actions, loading stalls, and navigation deviations), and Repetition (how frequently does the workflow actually run, and how consistent is that frequency?). Each dimension is scored 0–100, where higher is better. The Process Stability value is surfaced as a grouping on every AR readout; the dimensions themselves remain the unit of computation.
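As a toy rollup only: TNDRL does not publish the actual aggregation, so this sketch assumes a plain average of the three 0–100 dimensions:

```python
# Hypothetical Process Stability rollup. The plain average is an
# assumption; the glossary only states that three 0-100 dimensions
# roll up into one grouped value.
def process_stability(consistency: float, ui_stability: float,
                      repetition: float) -> float:
    dims = (consistency, ui_stability, repetition)
    assert all(0 <= d <= 100 for d in dims), "dimensions are scored 0-100"
    return sum(dims) / len(dims)
```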
See also: Automation Readiness Score
See also: Execution Risk
Term 3b
Execution Risk
Plain English
One of the two co-equal sub-components of the Automation Readiness Score. Answers a single question: if automation misbehaves on this workflow, how bad is the damage?
Why It Matters
Two workflows with identical stability can carry wildly different blast radii. One touches structured data in a single system; the other touches regulated data across four systems with high exception rates. Execution Risk isolates the damage question so automation decisions account for what happens when things go wrong — not just whether the happy path works.
How TNDRL Implements It
Execution Risk rolls up the dimensions that predict damage potential: Complexity (how many systems does the workflow span?), Data structure (how structured are the inputs agents will reason over?), Exception rate (how often does execution deviate from the happy path?), and, promoted to seventh canonical dimension on 2026-04-15, Compliance risk (regulatory exposure if automation misjudges the work; scoring engine shipping next). Every filed interaction penalty function in the scoring engine pairs an Execution Risk dimension with a Process Stability dimension, catching the "looks stable but execution is risky" patterns that a linear combination would miss.
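One way such a pairing could behave, sketched under two assumptions: both dimensions are scored 0–100 with higher meaning safer, and the penalty fires only when both are unhealthy at once. The threshold and weight values are illustrative:

```python
# Hypothetical interaction penalty pairing one Process Stability
# dimension (Consistency) with one Execution Risk dimension
# (Exception rate). A linear combination would add the two shortfalls;
# multiplying them makes the penalty zero unless BOTH dimensions are
# below threshold, which is the compounding-risk pattern.
def interaction_penalty(consistency: float, exception_rate_score: float,
                        threshold: float = 60.0, weight: float = 0.2) -> float:
    a = max(0.0, threshold - consistency)          # stability shortfall
    b = max(0.0, threshold - exception_rate_score) # risk shortfall
    return weight * a * b / threshold
```

With one healthy dimension the penalty vanishes; with both degraded it grows quadratically, which is exactly what a weighted sum cannot express.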
See also: Automation Readiness Score
See also: Process Stability
Term 4
Entropy
Plain English
How chaotic a workflow is — measuring branching, inconsistency, and unpredictability in execution paths.
Why It Matters
High entropy means agents face unpredictable decision points. It signals whether automation needs tighter guardrails or the process needs redesign first. Low-entropy workflows are candidates for rapid automation; high-entropy workflows need human judgment or deeper process understanding before agents can handle them.
How TNDRL Implements It
TNDRL computes entropy as the Shannon entropy of the variant frequency distribution observed across the session population. Low entropy means the majority of sessions follow a small number of dominant variants — predictable, repeatable paths. High entropy means many distinct execution routes with no dominant variant — substantial behavioral diversity. The entropy score appears in the Workflow Twin visualization and sits alongside AR as a diagnostic companion — it explains why a workflow's AR is what it is (high entropy means the agent will face unpredictable branching), but it is not a dimension of AR and not a peer decision signal. High-entropy steps often correlate with judgment calls, exception paths, or workflows that need redesign before automation.
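The computation above can be sketched directly, assuming a variant is identified by its ordered step sequence:

```python
import math
from collections import Counter

# Shannon entropy (in bits) of the variant frequency distribution
# across a session population, as described above.
def variant_entropy(sessions: list[tuple[str, ...]]) -> float:
    counts = Counter(sessions)
    n = len(sessions)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# One dominant variant -> low entropy; many distinct routes with no
# dominant variant -> high entropy.
low  = variant_entropy([("open", "check", "close")] * 9
                       + [("open", "rework", "close")])
high = variant_entropy([("open", f"step{i}", "close") for i in range(10)])
# high == log2(10): ten equally likely variants carry about 3.32 bits
```

Zero bits means every session followed one path; the score grows as behavioral diversity grows, which is why it pairs naturally with the Workflow Twin's variant view.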
See also: Automation Readiness Score
Term 5
Flow Efficiency
Plain English
The ratio of value-adding work to total elapsed time — how much of the process is productive versus waiting, reworking, or unnecessary handoffs.
Why It Matters
Shows where the biggest efficiency gains are, whether through process redesign or automation. For example, a workflow with 40% flow efficiency means the other 60% of elapsed time is consumed by waiting, rework, or unnecessary handoffs. That's where agents, better tooling, or process simplification can create immediate ROI.
How TNDRL Implements It
Measured from behavioral timing data across every execution. TNDRL tracks active work (typing, clicking, system response) versus idle time (waiting for approvals, waiting for another system to respond, manual data lookup). Surfaces wait states, redundant steps, and rework loops automatically. Flow Efficiency sits alongside AR as a diagnostic companion — it answers where the time is going, which often drives redesign-vs-automate decisions — but it is not a dimension of AR and not a peer decision signal. A workflow with low Flow Efficiency is often a redesign candidate even when its AR score is high; a workflow with high Flow Efficiency can still be dangerous to automate if its Execution Risk dimensions are poor.
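The ratio itself is simple once execution has been segmented into active and idle spans; the segmentation is the hard part and is not shown here. Segment labels are illustrative:

```python
# Flow Efficiency: active work time divided by total elapsed time.
# segments: (kind, duration_s) pairs where kind is "active"
# (typing, clicking, system response) or "idle" (waiting, lookups).
def flow_efficiency(segments: list[tuple[str, float]]) -> float:
    total = sum(d for _, d in segments)
    active = sum(d for kind, d in segments if kind == "active")
    return active / total if total else 0.0

# Matches the 40% example above: 40s active out of 100s elapsed.
workday = [("active", 40.0), ("idle", 35.0), ("idle", 25.0)]
```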
See also: Automation Readiness Score
Term 6
Living Blueprint
Plain English
The deployed automation TNDRL generates from observed work. A machine-readable execution plan with approved paths, blocked paths, escalation rules, and runtime constraints — the guardrails learned from observation, baked in and traveling with the automation.
Why It Matters
Without governance, agents operate without boundaries. A Living Blueprint defines what an agent can do, what it can't, and when to escalate to a human. The "living" prefix matters: the blueprint stays in sync with how work actually runs and adapts when drift is detected. It's the difference between a brittle script and safe, governed automation.
How TNDRL Implements It
Generated by Sprout from the Workflow Twin and Automation Readiness Score via population-level causal variant analysis across the corpus of observed sessions. Includes safety thresholds (e.g., escalate if confidence drops below 70%), compliance rules (which steps require human review), and rollback triggers. The blueprint progresses through two lifecycle states in Trellis: Climbing (supervised early operation with close human oversight) and Anchored (fully operational, autonomous within governed bounds). Trellis evaluates every agent action against the blueprint's guardrails at runtime. Patent pending.
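A toy model of the runtime guardrail check: field names and the allow/escalate/block outcomes are assumptions for illustration, with only the 70% confidence threshold taken from the example above:

```python
from dataclasses import dataclass

# Hypothetical machine-readable slice of a Living Blueprint.
@dataclass
class Blueprint:
    approved_steps: set[str]
    blocked_steps: set[str]
    min_confidence: float = 0.70     # escalate below this, per the example
    state: str = "Climbing"          # lifecycle: "Climbing" or "Anchored"

# Every agent action is evaluated against the blueprint at runtime.
def evaluate_action(bp: Blueprint, step: str, confidence: float) -> str:
    if step in bp.blocked_steps:
        return "block"
    if step not in bp.approved_steps or confidence < bp.min_confidence:
        return "escalate"            # hand the decision to a human
    return "allow"
```

The point of the sketch is the ordering: blocked paths are checked before anything else, and anything unapproved or low-confidence escalates rather than proceeding.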
See also: Drift Monitoring
Term 7
Drift Monitoring
Plain English
Continuous detection of when real work starts diverging from the approved model or safe operating conditions.
Why It Matters
Workflows change. People find new workarounds. Systems get updated. Drift monitoring catches when your automation model is no longer accurate — before it causes problems. Without drift detection, agents continue executing against stale policy while real work has evolved elsewhere.
How TNDRL Implements It
Behavioral observation continues post-deployment. TNDRL compares live execution against the Living Blueprint and alerts when divergence exceeds thresholds. Drift can be structural (new decision branches appearing in the Workflow Twin) or policy-based (approval rules changing, new compliance requirements). Alerts appear in the web app with severity (informational vs. critical) and recommended actions (update the blueprint, suspend automation, escalate to compliance review).
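Structural drift detection can be sketched as a comparison of live transition frequencies against the blueprint's baseline. The two-level severity mirrors the informational/critical split above, but the threshold values are assumptions:

```python
# Compare live edge frequencies (fraction of sessions taking each
# transition) against the baseline the blueprint was built from, and
# alert when divergence exceeds thresholds. New edges in `live` are
# new decision branches: structural drift.
def drift_alerts(baseline: dict[tuple[str, str], float],
                 live: dict[tuple[str, str], float],
                 warn: float = 0.05,
                 critical: float = 0.20) -> list[tuple[str, str]]:
    alerts = []
    for edge in set(baseline) | set(live):
        delta = abs(live.get(edge, 0.0) - baseline.get(edge, 0.0))
        if delta >= critical:
            alerts.append(("critical", "->".join(edge)))
        elif delta >= warn:
            alerts.append(("informational", "->".join(edge)))
    return sorted(alerts)
```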
See also: Living Blueprint
See also: Collection Integrity
Term 8
Collection Integrity
Plain English
The guarantee that data shown in the product has a real, complete path from collection to display — with no gaps, stubs, or fabricated data.
Why It Matters
Enterprise buyers need to trust that what they see in TNDRL reflects reality. Collection integrity means every score, every metric, and every workflow visualization is backed by real observed behavior. No hardcoded examples. No demo data masquerading as live data. No scoring algorithms using synthetic inputs.
How TNDRL Implements It
Tiered sync architecture — Tier 1 metadata always flows, Tier 2 sanitized data on schedule, Tier 3 raw data stays local. Classification and masking happen at the source before transmission. Every behavioral event carries source provenance metadata: when it was captured, by which collector, from which process, and how many validation passes it completed. The web app surfaces collection health (percentage of time collectors are active, percentage of machines enrolled, sync latency) so administrators can see gaps and trust the signal.
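The tier routing described above could be sketched as follows; the three-tier behavior is from the description, but the decision logic itself is an assumption:

```python
# Hypothetical routing for the tiered sync architecture:
# Tier 1 metadata always flows, Tier 2 sanitized data syncs on
# schedule, Tier 3 raw data never leaves the machine.
def route_event(tier: int, on_schedule: bool) -> str:
    if tier == 1:
        return "sync now"
    if tier == 2:
        return "sync now" if on_schedule else "queue"
    return "keep local"              # tier 3: raw data stays local
```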
See also: Behavioral Context Layer