| Capability | TNDRL | Process mining | Task mining | RPA | Agent orchestration |
|---|---|---|---|---|---|
| Primary signal | Behavioral execution — decisions, exceptions, timing, cross-app sequences | System event logs | Screen pixels | Scripted selectors and API calls | Agent tool calls and responses |
| Example vendors | TNDRL | Celonis, IBM, SAP Signavio, ABBYY Timeline | Soroco, Skan.ai, Microsoft Process Advisor, KYP.ai | UiPath, Automation Anywhere, Blue Prism, Workato | LangChain, CrewAI, OpenAI Assistants, ServiceNow AI Control Tower, UiPath Maestro, Microsoft Copilot Studio, Palantir AIP |
| Exception handling | Maps every exception path, escalation, and workaround automatically | Only sees exceptions that generate system events | Screenshots of exception screens, without causal understanding | Script fails or retries — no model of why the exception exists | Re-prompts and retries at the agent layer — no operational model |
| Pre-automation safety scoring | Dimensional readiness across 6 risk factors, plus an evidence-sufficiency gate (withholds a score when observation is insufficient) | None | None | None — safety is the builder's responsibility before deployment | Static policy checks and tool allowlists — not evaluated against workflow evidence |
| Automation artifact | Living Blueprint — generated from observed evidence via causal variant analysis, with guardrails baked in | None — produces process maps, not automations | Candidate shortlist only (record-and-replay is session-level and brittle) | Hand-authored bots — each workflow scripted manually | Agent chains composed from tool catalogs — no grounding in behavioral evidence |
| Runtime governance | Every agent action evaluated at runtime against the Living Blueprint's guardrails: Allow / Escalate / Block, with a full audit log | Post-hoc conformance checking against the modeled process | None | Bot executes whatever it was scripted to do — failure is visible only after damage | Routing and guardrails at the agent layer — not grounded in human workflow reality |
| Drift detection | Continuous behavioral monitoring; drift feeds back into observation, closing the loop | Detects drift in system logs — misses behavioral drift | No continuous monitoring | Bots break silently when the UI changes — no drift awareness | No drift signal against the human workflow — agent-side metrics only |
| Closed-loop feedback | Yes — Govern feeds back into See; scores update and the Living Blueprint adapts | No | No | No | No |
| Privacy model | Metadata-first: no screenshots, no PII captured | No screen data; relies on system logs (which may contain PII in event payloads) | Screenshots capture everything visible — PII, PHI, credentials | Depends on the bot's execution surface (may touch sensitive systems at runtime) | Depends on upstream tool access — not a collection layer |
| Time to first Workflow Twin | 2 weeks — lightweight desktop collectors, no integrations | Weeks to months — log integration and API mapping | Weeks — recording sessions and pixel-analysis overhead | N/A — RPA produces bots, not workflow visibility | N/A — orchestration produces agent chains, not workflow visibility |
| Compliance readiness | Metadata-only collection aligned with HIPAA, PCI DSS, SOC 2, and GDPR posture | Logs may contain sensitive data, depending on source systems | Screenshots require extensive DLP and audit controls | Depends on underlying system access scopes | Depends on tool-level governance, not workflow-level governance |
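The "Runtime governance" row describes evaluating each agent action against a blueprint's guardrails and returning Allow, Escalate, or Block with an audit trail. A minimal sketch of that decision pattern follows; every name here (`Blueprint`, `Guardrail`, `evaluate`, the threshold fields) is hypothetical and illustrates the concept only, not TNDRL's actual API.

```python
from dataclasses import dataclass, field
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    ESCALATE = "escalate"
    BLOCK = "block"

@dataclass
class Guardrail:
    # Hypothetical limits, imagined as derived from observed workflow behavior:
    # a hard ceiling that blocks, and a soft ceiling that escalates to a human.
    action: str
    max_amount: float
    escalate_above: float

@dataclass
class Blueprint:
    guardrails: dict[str, Guardrail]
    audit_log: list[dict] = field(default_factory=list)

    def evaluate(self, action: str, amount: float) -> Verdict:
        """Return a verdict for one agent action and record it in the audit log."""
        rail = self.guardrails.get(action)
        if rail is None:
            verdict = Verdict.ESCALATE  # unknown action: never silently allow
        elif amount > rail.max_amount:
            verdict = Verdict.BLOCK
        elif amount > rail.escalate_above:
            verdict = Verdict.ESCALATE
        else:
            verdict = Verdict.ALLOW
        self.audit_log.append(
            {"action": action, "amount": amount, "verdict": verdict.value}
        )
        return verdict

# Example: a refund guardrail with a $100 escalation line and a $500 hard stop.
bp = Blueprint({"issue_refund": Guardrail("issue_refund", 500.0, 100.0)})
print(bp.evaluate("issue_refund", 50.0).value)   # within limits
print(bp.evaluate("issue_refund", 900.0).value)  # over the hard ceiling
```

The design choice worth noting is the unknown-action branch: an action with no guardrail escalates rather than passing through, which matches the table's contrast with agent-layer allowlists that only check what they were told to check.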