
Lamina
Declarative Intelligence Orchestration
The Architecture for Composing Intelligence
Lamina is a declarative configuration language and orchestration system for building modular, intelligent, and adaptive AI workflows. Inspired by functional programming, cognitive science, and systems theory, Lamina provides a layered execution model that bridges low-level operations with high-level strategic intent—while preserving composability, observability, and semantic clarity.
Beyond Agents: A New Paradigm
Lamina is not an agent. Lamina is an architecture for composing intelligence.
While most agentic systems are opaque, procedural, flat, and tightly coupled, Lamina takes a fundamentally different approach:
- Layered — Every computation exists within a well-typed semantic layer (L₀ to L₄), from token operations to tactical decisions to constitutional principles.
- Declarative — You describe what to achieve; Lamina determines how to accomplish it.
- Compositional — Flows, nodes, and policies are modular and reusable—like functional programs, not throwaway prompts.
- Transparent — Every action has traceable provenance and interpretable structure.
- Reflexive — Lamina adapts plans through runtime feedback and honors system-level constraints.
- Semantic — Goals are interpreted and synthesized into structured flows, not hardcoded scripts.
The Lamina Execution Model
Lamina defines intelligent computation as a series of layered transformations:
| Layer | Role | What It Does |
|---|---|---|
| L₄ — Integrity & Modulation | Governs memory, constraints, and coherence | Sets ethical boundaries and systemic constraints |
| L₃ — Reflexion & Adaptation | Feedback loops and plan updates | Enables workflows to adapt based on results |
| L₂ — Strategic Intent | Goal decomposition into execution graphs | Translates high-level goals into actionable plans |
| L₁ — Control Flow | Task orchestration via graphs and conditions | Manages how tasks connect and execute |
| L₀ — Microcognition | Atomic task execution (function calls, model inference) | Performs the actual work |
Each layer builds on the one below it, enabling expressive and traceable control from token to tactic to trajectory.
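As a loose, illustrative mapping (an interpretation, not an official correspondence), the layers can be read directly off the configuration constructs used later in this document; the `review` graph below is invented for this purpose:

```
interface "llama_cli" {
  source: string = "lamina/llama/1.0.0"
}

graph "review" {                        // L₁ — control flow: the execution graph
  node "draft" {                        // L₀ — microcognition: one atomic task
    call interface.llama_cli "draft" {
      model: string = "gemma-1.1-7b-it.Q4_K_M.gguf"
      prompt: string = "Draft a one-paragraph summary."
    }
  }
  condition "nonempty" {                // L₃ — reflexion: react to runtime results
    expression: boolean = ${review.node.draft.llama_cli.draft.out} != ""
  }
  branch "revise" when not "nonempty" { // L₂/L₃ — the plan adapts at runtime
    node "redraft" {
      call interface.llama_cli "redraft" {
        model: string = "gemma-1.1-7b-it.Q4_K_M.gguf"
        prompt: string = "Revise the draft for clarity."
      }
    }
  }
}
```

L₄ concerns (memory, constraints, coherence) sit above any single graph and are expressed through annotations and policies rather than nodes.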
Key Capabilities
For Developers & Engineers
- Declarative Rules — Control application behavior with precise conditions: `edge "analyze" if "$score > 0.8"`
- Multi-Agent Collaboration — Enable seamless agent cooperation with state handoffs and an advanced, declarative memory system
- Provider-Agnostic — Connect to any model provider with unified syntax
- Full-Stack Observability — Track everything from tokens to kilowatts
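As a hypothetical sketch of the declarative-rules capability (the exact placement and scoping rules for `edge` are assumptions, and the `triage` graph with its `score_input` and `analyze` nodes is invented for illustration), an edge condition might gate a downstream node on a runtime score:

```
interface "llama_cli" {
  source: string = "lamina/llama/1.0.0"
}

// Hypothetical sketch: gating a downstream node with a declarative
// edge condition. Names and edge placement are assumptions.
graph "triage" {
  node "score_input" {
    call interface.llama_cli "score" {
      model: string = "gemma-1.1-7b-it.Q4_K_M.gguf"
      prompt: string = "Rate the relevance of this request from 0 to 1. Answer with only the number."
    }
  }
  node "analyze" {
    call interface.llama_cli "analysis" {
      model: string = "gemma-1.1-7b-it.Q4_K_M.gguf"
      prompt: string = "Analyze the request in depth."
    }
  }
  // Only traverse to "analyze" when the score clears the threshold.
  edge "analyze" if "$score > 0.8"
}
```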
For Enterprise & Governance
- Audit-Ready by Design — Built-in traceability with `audit { user = "$env.USER" }`
- Cross-Sovereign Execution — Adapt to different jurisdictions with annotations like `@data.region = "EU"`
- Policy-as-Code — Enforce compliance directly in workflows
- Secure Secret Management — Integrate with tools like HashiCorp Vault, AWS Secrets Manager, and more
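Pulled together, these governance features might look like the following sketch. The placement of the `audit` block and the `@data.region` annotation relative to a graph is an assumption, and the `customer_report` graph is invented for illustration:

```
// Hypothetical sketch combining the governance constructs above.
// Block placement and the "customer_report" graph are assumptions.
@data.region = "EU"
graph "customer_report" {
  // Record who ran the workflow for later audit.
  audit { user = "$env.USER" }
  node "generate" {
    call interface.exec "run_report" {
      arguments: list<string> = ["echo \"report generated under EU data residency\""]
    }
  }
}
```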
For Creators & Innovators
- Dynamic Adaptation — Create workflows that reason and adjust in real-time
- Sustainability Awareness — Optimize for both performance and energy use
- Configurable Memory — Define your ideal memory architecture with simple annotations
- Deploy Anywhere — Run on everything from edge devices to cloud infrastructure
Getting Started
Lamina makes it easy to define intelligent workflows that behave according to organizational, governmental, or social constraints.
Here's an example of a conditional workflow that checks whether a story is suitable for children:
```
interface "llama_cli" {
  source: string = "lamina/llama/1.0.0"
}

graph "storytime" {
  // Step 1: Write the adult story
  node "write_adult_story" {
    call interface.llama_cli "adult_story" {
      model: string = "gemma-1.1-7b-it.Q4_K_M.gguf"
      prompt: string = "Write a story called 'All Cats Are Gray'."
      temperature: string = "0.7"
    }
  }

  // Step 2: Check if the story is suitable for children
  node "check_suitability" {
    call interface.llama_cli "check_suitability" {
      model: string = "gemma-1.1-7b-it.Q4_K_M.gguf"
      temperature: string = "0.3"
      prompt: string = """Analyze the following story:
${storytime.node.write_adult_story.llama_cli.adult_story.out}
Is this story appropriate for children? Answer with only 'SUITABLE' or 'NOT_SUITABLE'."""
    }
  }

  // Step 3: Define the condition
  condition "is_suitable" {
    expression: boolean = ${storytime.node.check_suitability.llama_cli.check_suitability.out} == "SUITABLE"
  }

  // Step 4: Branch based on suitability
  branch "original" when "is_suitable" {
    node "process_story" {
      call interface.exec "echo_original" {
        arguments: list<string> = ["echo \"${storytime.node.write_adult_story.llama_cli.adult_story.out}\""]
      }
    }
  }

  branch "child_friendly" when not "is_suitable" {
    node "create_child_version" {
      call interface.llama_cli "childrens_story" {
        model: string = "gemma-1.1-7b-it.Q4_K_M.gguf"
        temperature: string = "1.0"
        prompt: string = """Create a child-friendly version of this story:
${storytime.node.write_adult_story.llama_cli.adult_story.out}"""
      }
    }
  }
}
```
Why Choose Lamina?
Lamina stands apart from alternatives like LangChain, Haystack, and CrewAI through its unified approach to orchestration:
- Simplicity — Declarative syntax instead of complex code
- Control — Precise governance over every aspect of your workflow
- Adaptability — Workflows that reason and evolve at runtime
- Transparency — Nothing lost, nothing hidden by design
"The smallest meaningful action should be composable into the largest meaningful system."
Lamina is more than a tool—it’s a new layer of thought. A modular, ethical, self-correcting substrate for intelligence by design.
Join the Revolution
As AI becomes more complex and powerful, the systems that orchestrate it must evolve. Lamina represents the next generation of AI workflow architecture, providing the structure, context, and clarity needed for truly intelligent computation.
Discover how Lamina can transform your AI workflows today.