# Core Philosophy

Understanding the design principles behind the Vyuh Workflow Engine.

## Token-Based Execution
Unlike imperative workflow engines that execute code linearly, the Vyuh Workflow Engine uses tokens to track execution position. This enables:
- Parallel Execution: Multiple tokens can exist simultaneously
- Resume/Recovery: Exact position is persisted for crash recovery
- Visualization: Token positions can be displayed in a UI
The workflow instance tracks token positions:
```json
{
  "instanceId": "wf-123",
  "tokens": [
    { "id": "t1", "currentNodeId": "validate", "isActive": true }
  ]
}
```

As the workflow executes, tokens move through nodes, tracking the current execution position.
## Unified Workflow Model
The engine uses a unified model where Workflow is both:
### JSON-Serializable
A workflow can be:
- Stored in a database as JSON
- Versioned and migrated
- Loaded at runtime
- Created via visual editors or code
### Executable
The same workflow object has:
- Type-safe node configurations (TaskNodeConfiguration, UserTaskNodeConfiguration, etc.)
- Attached executors resolved via the TypeRegistry
- Full runtime capabilities
```dart
// Load from storage - JSON deserialization with type resolution
final workflow = await engine.loadWorkflow(workflowId);

// Execute immediately
final instance = await engine.startWorkflow(
  workflowCode: workflow.code,
  input: {'entityId': '123'},
);
```

This unified model eliminates the complexity of separate "definition" and "executable" layers.
## Descriptor-Based Executors
Instead of embedding logic directly in workflow definitions, executors are registered via descriptors:
```dart
// Define executors in a descriptor
final descriptor = WorkflowDescriptor(
  title: 'My Executors',
  tasks: [SendEmailTaskExecutor.typeDescriptor],
);

// Create context with all descriptors
final context = RegistryDeserializationContext(
  descriptors: [DefaultWorkflowDescriptor(), descriptor],
);

// Create engine with context and storage
final engine = WorkflowEngine(
  context: context,
  storage: InMemoryStorage(context: context),
);
await engine.initialize();

// The workflow uses executor classes directly
.task('sendEmail', executor: SendEmailTaskExecutor())
```

Benefits:
- Separation of Concerns: Business logic is in executors, flow logic is in definitions
- Reusability: Same executor can be used across workflows
- Testability: Executors can be unit tested independently
- Security: Workflow definitions don't contain executable code
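The registration idea can be sketched with a plain map. This simplified version shows how executors can be resolved by a schema-type string so that definitions carry no code; the `ExecutorRegistry` and simplified `TaskExecutor` interface below are illustrative, not the engine's actual `TypeRegistry`:

```dart
// Simplified registry sketch: executors are registered once and resolved
// by schema type, so workflow definitions only reference a string.
abstract class TaskExecutor {
  String get schemaType;
  Future<String> execute(Map<String, dynamic> input);
}

class SendEmailExecutor implements TaskExecutor {
  @override
  String get schemaType => 'task.sendEmail';

  @override
  Future<String> execute(Map<String, dynamic> input) async =>
      'sent to ${input['to']}';
}

class ExecutorRegistry {
  final _byType = <String, TaskExecutor>{};

  void register(TaskExecutor executor) =>
      _byType[executor.schemaType] = executor;

  TaskExecutor resolve(String schemaType) {
    final executor = _byType[schemaType];
    if (executor == null) {
      throw StateError('No executor registered for $schemaType');
    }
    return executor;
  }
}

Future<void> main() async {
  final registry = ExecutorRegistry()..register(SendEmailExecutor());

  // A stored workflow definition carries only 'task.sendEmail';
  // the code is attached at runtime by lookup.
  final executor = registry.resolve('task.sendEmail');
  print(await executor.execute({'to': 'a@b.com'})); // sent to a@b.com
}
```

This is why stored definitions stay pure data: the string-to-executor binding happens at engine startup, not inside the definition.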
## Signal-Based Coordination
The engine uses signals for all external interactions:
- User Tasks: Create an inbox item, wait for signal with user's response
- External Systems: Wait for webhook callbacks
- Timers: Wait for timer to fire (sends a signal)
```dart
// Wait for a signal
.signalWait('awaitPayment', signal: 'payment_completed')

// Later, external system sends signal
await engine.sendSignal(
  workflowInstanceId: instanceId,
  node: 'awaitPayment', // The node ID waiting for the signal
  payload: {'transactionId': 'TXN-123'},
);
```

## BPMN-Inspired Patterns
The engine supports standard BPMN gateway patterns:
| Pattern | Gateway | Description |
|---|---|---|
| Exclusive Choice | oneOf (XOR) | Route to exactly ONE path |
| Multi-Choice | anyOf (OR) | Route to ONE OR MORE paths |
| Parallel Split | allOf (AND) | Route to ALL paths |
| Synchronization | allOf join | Wait for ALL branches |
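The routing semantics in the table can be sketched in plain Dart. The `Gateway` enum and the condition map below are illustrative only; the engine's actual gateway API may differ:

```dart
// Illustrative sketch of XOR / OR / AND gateway routing semantics.
enum Gateway { oneOf, anyOf, allOf }

/// Returns the branch names that receive a token when the gateway fires.
List<String> route(
  Gateway gateway,
  Map<String, bool> branchConditions, // branch name -> condition result
) {
  switch (gateway) {
    case Gateway.oneOf: // XOR: exactly ONE path - first true condition wins
      for (final entry in branchConditions.entries) {
        if (entry.value) return [entry.key];
      }
      return [];
    case Gateway.anyOf: // OR: every path whose condition is true
      return [
        for (final e in branchConditions.entries)
          if (e.value) e.key,
      ];
    case Gateway.allOf: // AND: all paths, unconditionally
      return branchConditions.keys.toList();
  }
}

void main() {
  final conditions = {'highValue': true, 'expedited': true, 'audit': false};
  print(route(Gateway.oneOf, conditions)); // [highValue]
  print(route(Gateway.anyOf, conditions)); // [highValue, expedited]
  print(route(Gateway.allOf, conditions)); // [highValue, expedited, audit]
}
```

A parallel split (`allOf`) produces one token per path, which is why the synchronizing `allOf` join must wait for all of them before a single token continues.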
## Pluggable Storage
The WorkflowStorage interface abstracts persistence:
```dart
abstract class WorkflowStorage {
  WorkflowRepository get workflows;
  WorkflowInstanceRepository get instances;
  UserTaskInstanceRepository get userTaskInstances;
  WorkflowEventRepository get events;

  Future<T> transaction<T>(Future<T> Function() operation);
  Future<void> initialize();
  Future<void> dispose();
}
```

Implement this interface for your database: PostgreSQL, MongoDB, SQLite, etc.
## Idempotency
All operations are designed to be idempotent for crash recovery:
- Starting a workflow checks if instance already exists
- Task executors should check if work is already done
- User task creation checks for existing active task
- Signal processing is deduplicated
```dart
class MyTaskExecutor extends TaskExecutor {
  static const _schemaType = 'task.myTask';

  @override
  String get schemaType => _schemaType;

  @override
  String get name => 'My Task';

  @override
  Future<TaskResult> execute(ExecutionContext ctx) async {
    // IDEMPOTENT: Check if already processed
    final existing = await findExisting(ctx.input['id']);
    if (existing != null) {
      return TaskSuccess(output: existing.toJson());
    }

    // Process and return
    final result = await process(ctx.input);
    return TaskSuccess(output: result.toJson());
  }
}
```

## Type Safety
The engine leverages Dart's type system:
- Generic Executors: `TypeRegistry<TaskExecutor>`, `TypeRegistry<ConditionExecutor>`
- Sealed Results: `NodeResult` hierarchy for exhaustive handling
- Type Descriptors: `TypeDescriptor<T>` for JSON serialization
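Sealed result types make handling exhaustive at compile time. Here is a minimal sketch of the pattern; the `NodeSuccess`/`NodeFailure`/`NodeWaiting` subclasses are made up for illustration and may not match the engine's actual `NodeResult` hierarchy:

```dart
// Illustrative sealed hierarchy: the compiler forces every subclass
// to be handled in the switch expression below.
sealed class NodeResult {}

class NodeSuccess extends NodeResult {
  NodeSuccess(this.output);
  final Map<String, dynamic> output;
}

class NodeFailure extends NodeResult {
  NodeFailure(this.error);
  final String error;
}

class NodeWaiting extends NodeResult {}

String describe(NodeResult result) => switch (result) {
      NodeSuccess(:final output) => 'success: $output',
      NodeFailure(:final error) => 'failure: $error',
      NodeWaiting() => 'waiting for signal',
      // Adding a new NodeResult subclass makes this switch a compile-time
      // error until the new case is handled.
    };

void main() {
  print(describe(NodeSuccess({'ok': true}))); // success: {ok: true}
  print(describe(NodeWaiting())); // waiting for signal
}
```

With a non-sealed hierarchy, a forgotten case would only surface at runtime; sealing the root moves that check to the compiler.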
## Next Steps
- Architecture - Detailed system design
- Workflow - Understanding workflow structure
- Tokens - Deep dive into token mechanics