2025 was the year the moving parts started to lock into place.
Lexiana matured into a layered legal intelligence system that mirrors how lawyers actually reason. Our forensic stack for synthetic media went from concept to early architecture. Persona shifted from “avatar tech” to a serious interface for long form, high stakes communication. Underneath all of that, our collaboration with Inmeso pushed our decision engines toward traceable, composable, multi context reasoning.
2026 will be the year we harden this into an ecosystem that can survive European regulation, professional scrutiny and judicial cross examination.
This is how we see the next year.
Markets we are building for
The external environment is tightening around exactly the domains we serve.
In Europe, the implementation timeline of the AI Act means that by August 2026 high risk systems will be squarely in scope, with full rollout foreseen by 2027. General provisions and prohibitions arrive first, rules for general purpose AI follow, then high risk obligations for operators, especially for systems that affect rights and safety in Annex III domains such as credit, employment and certain legal processes.
For legal AI specifically, the numbers are no longer niche:
- The global legal AI market is estimated at roughly 1.4 billion US dollars in 2024 and projected to reach nearly 4 billion by 2030, with around 17 percent annual growth.
- Reports focused on legal AI software forecast growth from a bit over 3 billion dollars in 2025 to more than 10 billion by 2030.
- Broader legaltech AI projections point to an eight billion dollar market by 2030 with growth rates near 27 percent, driven by document management, ediscovery, case management and billing automation.
At the same time, the alternative legal services sector has reached tens of billions in annual revenue, with managed services and tech enabled offerings growing fastest. Generative AI is already singled out as a force multiplier for these providers.
On the evidence side, the curve is even sharper. Deepfake and fake image detection markets are forecast to grow at more than 30 percent per year in the second half of the decade, reflecting an acceleration in both attack surface and demand for forensic tooling.
Meanwhile, the C2PA standard for content credentials is moving from specification to practical adoption, with major media and technology players experimenting with stamping and verifying provenance at scale.
Our roadmap for 2026 assumes three things:
- Legal AI will become part of the basic infrastructure of European legal practice rather than an experiment at the edge.
- Courts, insurers and regulators will require not just model output but verifiable provenance and calibrated forensic evidence.
- Organisations will need decision engines that can operate under AI Act obligations without turning every deployment into a bespoke research project.
Lexiana, Verify, Persona, our shared decision stack with Inmeso and our Academy are the instruments we are tuning against those conditions.
Lexiana in 2026: legal intelligence under audit
Lexiana has never been a chatbot. It is a legal intelligence system: retrieval guided reasoning, clause aware analysis, citation integrity and provenance tracing, designed from the start to match how legal professionals think and argue.
February 2026 is the next inflection point. That is when we release Lexiana 3.5.
Lexiana 3.5 as a system upgrade
Lexiana 3.5 brings together several winter development lines into one release:
- A native macOS desktop application built around hybrid inference: on device preprocessing, real time semantic indexing and accelerated routing for large case files and multi document reviews
- A redesigned retrieval stack with a dedicated memory compression layer, allowing deeper reasoning over long litigation histories and dense regulatory materials without exploding context windows
- A rebuilt clause reasoning engine that understands statute language, cross references and case law segments as a structured graph instead of a flat text blob
- A more resilient citation integrity system where every answer carries a reconstructable evidence chain, not just a list of references
In practice, this means that a complex regulatory or litigation question in Dutch or Spanish can be answered against many interacting sources, with Lexiana exposing the path it took through the material.
Instrumented reasoning as a first class feature
Early 2026 also marks the deployment of reasoning instrumentation inside Lexiana 3.5.
Every significant reasoning step can be recorded as a structured event:
- query interpretation and reformulation
- selection and weighting of retrieval candidates
- clause level focus shifts between statutes, contracts and case law
- branch and prune decisions inside longer chains of argument
These events feed into a universal provenance trace that spans Lexiana and, increasingly, other products. The trace is versioned and exportable. For internal QA it functions as a microscope; for clients in high risk domains it becomes an audit trail; for courts it can form part of an evidential record explaining how AI assisted reasoning contributed to a decision.
This is crucial in a world where high risk classifications under the AI Act demand documented assessments, transparent operation and the ability to demonstrate that AI systems behave within defined boundaries in practice, not only on paper.
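As a rough illustration, a single reasoning event and the trace that collects it can be quite small. The sketch below shows one possible shape; the field names, step types and versioning scheme are illustrative assumptions, not Lexiana's actual schema.

```python
# Minimal sketch of a structured reasoning event and a versioned, exportable trace.
# Field names (step_type, inputs, outputs) are illustrative, not Lexiana's real schema.
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ReasoningEvent:
    step_type: str                 # e.g. "query_reformulation", "retrieval_weighting"
    inputs: dict
    outputs: dict
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))

@dataclass
class ProvenanceTrace:
    trace_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    version: str = "2026.02"       # hypothetical trace schema version
    events: list = field(default_factory=list)

    def record(self, event: ReasoningEvent) -> None:
        self.events.append(event)

    def export(self) -> str:
        # Exportable, versioned artefact for QA, audit or evidential use.
        return json.dumps(asdict(self), indent=2)

trace = ProvenanceTrace()
trace.record(ReasoningEvent(
    step_type="query_reformulation",
    inputs={"query": "termination clauses under Dutch labour law"},
    outputs={"reformulated": ["ontslagrecht", "opzegtermijn", "transitievergoeding"]},
))
print(trace.export())
```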
Compliance aligned, not compliance burden
By mid 2026, we expect many clients to be in the final preparation stages for high risk AI obligations and for sectoral regulators to start asking harder questions about explainability and control.
To make Lexiana a help rather than a burden in that context, 2026 will add three governance layers:
- Policy constrained inference, where administrators can encode which jurisdictions, sources and model families are allowed for a given workflow and have those constraints enforced at runtime
- Scenario safe sandboxing, where hundreds of synthetic scenarios can be run against realistic structures without touching production data, supporting internal risk mapping and training
- Evidence bundle export, where not only citations but the full evidence chain and reasoning context can be packaged into signed artefacts for internal or external review
The ambition is clear. If Lexiana contributes to a legal reasoning process that is later challenged, professionals should have the tools and artefacts to show exactly how it contributed.
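To make the first of those layers concrete, the sketch below shows what a runtime policy check for policy constrained inference could look like; the workflow name, policy fields and model family label are assumptions for the example, not Lexiana's actual enforcement interface.

```python
# Minimal sketch of policy constrained inference: a per-workflow policy is
# checked at runtime before any model call. Names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class WorkflowPolicy:
    allowed_jurisdictions: frozenset
    allowed_sources: frozenset
    allowed_model_families: frozenset

POLICIES = {
    "nl_employment_review": WorkflowPolicy(
        allowed_jurisdictions=frozenset({"NL", "EU"}),
        allowed_sources=frozenset({"statute", "case_law", "internal_kb"}),
        allowed_model_families=frozenset({"lexiana-core"}),
    ),
}

def enforce(workflow: str, jurisdiction: str, source: str, model_family: str) -> None:
    policy = POLICIES[workflow]
    violations = [
        name for name, value, allowed in [
            ("jurisdiction", jurisdiction, policy.allowed_jurisdictions),
            ("source", source, policy.allowed_sources),
            ("model_family", model_family, policy.allowed_model_families),
        ] if value not in allowed
    ]
    if violations:
        raise PermissionError(f"policy violation in {workflow}: {violations}")

enforce("nl_employment_review", "NL", "case_law", "lexiana-core")  # passes silently
```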
Verify in 2026: from provenance to evidence
Our November 2025 essay on the coming evidence crisis set out a problem that is already visible in courts and investigations. Synthetic audio, video and imagery erode the default trust legal systems place in recorded evidence. The burden shifts from “is this plausible” to “can this be proven authentic beyond reasonable doubt”.
Throughout 2025 we laid the legal and technical foundations for Verify, our forensic AI system for synthetic media. In 2026 we begin to operationalise it.
Field calibrated deepfake forensics
Detection models are only useful if their behaviour is understood under real conditions. Building on the early architecture, Verify in 2026 focuses on three data realities:
- low bitrate, heavily compressed messaging and social media artefacts
- partial or corrupted files, especially in audio
- mixed provenance environments where some material carries content credentials and some does not
Each detector will be accompanied by an explicit calibration profile that documents error rates, operating thresholds and known failure modes under defined conditions. Reports will avoid binary “real or fake” statements and instead express reasoned, statistically grounded conclusions that can withstand cross examination, informed by emerging legal scholarship on deepfake detection and evidential standards.
Evidence credentials with legal weight
Technically, Verify will expose an evidence credential service.
Every analysis yields a machine readable credential that includes:
- cryptographic hashes of the media
- metadata on origin, where available, including content credentials
- the detector suite, version and calibration profile used
- scores, likelihood ratios and explanation artefacts
These credentials are designed to align with European eIDAS trust concepts, so that they can be sealed, time stamped and integrated into chains of custody with legal presumptions of integrity and origin. Combined with the universal provenance trace used elsewhere in our stack, this makes forensic AI outputs traceable, testable and durable.
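The sketch below shows the general shape of such a credential as a machine readable artefact ready for sealing and time stamping further down the chain; the field names, detector identifiers and calibration profile label are illustrative assumptions rather than Verify's actual format.

```python
# Minimal sketch of a machine readable evidence credential. All field names
# and detector/profile labels are hypothetical examples.
import hashlib
import json
from dataclasses import dataclass, field, asdict

@dataclass
class EvidenceCredential:
    media_sha256: str
    origin_metadata: dict          # includes content credentials where available
    detector_suite: str
    detector_version: str
    calibration_profile: str
    scores: dict                   # e.g. likelihood ratios per detector
    explanation_refs: list = field(default_factory=list)

    def to_json(self) -> str:
        # Deterministic serialisation so the credential itself can be hashed and sealed.
        return json.dumps(asdict(self), sort_keys=True, indent=2)

media_bytes = b"...media bytes..."
credential = EvidenceCredential(
    media_sha256=hashlib.sha256(media_bytes).hexdigest(),
    origin_metadata={"c2pa_manifest_present": False, "received_via": "messaging_export"},
    detector_suite="verify-audio-visual",
    detector_version="0.9.1",
    calibration_profile="low-bitrate-messaging-2026Q1",
    scores={"face_swap_llr": 2.7, "voice_clone_llr": 0.4},
)
print(credential.to_json())
```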
Chain of custody aware workflows
Courts and insurers do not think in single inferences. They think in timelines.
Verify therefore integrates forensic analysis with chain of custody modelling. Media items can be tracked across ingest, transformation, stamping, compression, re encoding and presentation. Content credentials based on the C2PA standard become part of that chain, providing a machine verifiable history of who did what, when, to which artefact.
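One simple way to make such a chain machine verifiable is to hash link each custody event to its predecessor, so that any later alteration of the history becomes detectable. The sketch below illustrates the idea with hypothetical event names, not Verify's production implementation.

```python
# Minimal sketch of a hash linked chain of custody: each entry commits to the
# previous one, so rewriting earlier history breaks verification.
import hashlib
import json

def append_custody_event(chain: list, action: str, actor: str, artefact_sha256: str) -> list:
    prev_hash = chain[-1]["entry_hash"] if chain else "genesis"
    body = {"action": action, "actor": actor, "artefact_sha256": artefact_sha256, "prev": prev_hash}
    entry_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return chain + [{**body, "entry_hash": entry_hash}]

def verify_chain(chain: list) -> bool:
    prev = "genesis"
    for entry in chain:
        body = {k: entry[k] for k in ("action", "actor", "artefact_sha256", "prev")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if body["prev"] != prev or recomputed != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True

chain = []
chain = append_custody_event(chain, "ingest", "investigator@firm", "ab12...")
chain = append_custody_event(chain, "re_encode", "verify-pipeline", "cd34...")
print(verify_chain(chain))  # True; editing any earlier entry makes this False
```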
By late 2026, our objective is to support real pilot cases where Verify generated credentials are introduced alongside traditional forensic reports, allowing courts to see how AI borne analysis can be grounded in legal trust frameworks rather than run parallel to them.
Persona in 2026: intelligent communications interface
Persona started as a push toward photorealistic avatars and expressive digital presenters. By mid 2025 it had already evolved into a modular interaction layer that can adapt tone, intent and non verbal behaviour to context. Clients are experimenting with it for onboarding, legal pre screens and multilingual instructional flows.
Winter 2025 marks the second architecture phase, and late 2026 is reserved for Persona’s public debut.
Context aware cinematics
In 2026 Persona’s camera and scene system becomes tightly coupled to structured knowledge and reasoning traces.
The system will not simply cut between shots on a fixed script. It will respond to the semantics of what is being explained:
- close framing and slower pacing for definitions, risk disclosures and edge cases
- document centric views when clauses, exhibits or screenshots are referenced
- shared views that juxtapose data visualisations with narrative commentary for investigative or analytical content
Persona reads the same underlying provenance traces that Lexiana and Verify produce. That means it can highlight where an explanation hinges on a specific statute fragment, a particular evidential link or a key model decision. For internal training, this gives teams an instructor that literally shows its work.
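At its simplest, the underlying idea is a mapping from what a segment is about to how it should be framed. The sketch below uses hypothetical semantic tags and shot names rather than Persona's real interface.

```python
# Minimal sketch of mapping content semantics to framing decisions.
# Tag and shot names are illustrative assumptions.
FRAMING_RULES = {
    "definition":       {"shot": "close", "pace": "slow"},
    "risk_disclosure":  {"shot": "close", "pace": "slow"},
    "clause_reference": {"shot": "document_view", "pace": "normal"},
    "data_analysis":    {"shot": "split_view", "pace": "normal"},
}

def plan_shot(segment_tags: list, default: dict = None) -> dict:
    if default is None:
        default = {"shot": "medium", "pace": "normal"}
    # First matching tag wins; the real system would also weigh the reasoning trace.
    for tag in segment_tags:
        if tag in FRAMING_RULES:
            return FRAMING_RULES[tag]
    return default

print(plan_shot(["clause_reference", "data_analysis"]))  # {'shot': 'document_view', 'pace': 'normal'}
```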
Long form, emotionally calibrated delivery
Persona’s voice and delivery engine is being tuned for long form, dense material rather than short promotional clips.
In 2026 we are focusing on:
- robust pronunciation of legal terminology and multi language content, initially Dutch, Spanish and English
- stable prosody over twenty to forty minute explanations, with controlled variation to maintain attention without theatricality
- subtle emotional signalling for sensitive topics such as fraud, regulatory breaches or litigation risk, where overacting is as damaging as monotony
Combined with the academy and our podcast work, Persona becomes the front end for an entire teaching and onboarding stack rather than a standalone video generator.
Decision support and synthetic reasoning with Inmeso
Our strategic collaboration with Inmeso Artificial Intelligence, formalised in 2025, is the backbone of our decision support and synthetic reasoning infrastructure.
The joint work concentrates on three architectural pillars that come into their own in 2026.
Multi context engines
The multi context engine is designed to ingest structured and unstructured data, maintain a live situational snapshot and propagate changes through that snapshot without losing traceability.
In practice this means combining:
- regulatory and policy documents
- internal procedures and knowledge bases
- transactional and event streams from enterprise systems
- human inputs such as memos, chat transcripts and meeting notes
The result is a dynamic state that can support Lexiana’s legal reasoning, Verify’s forensic narratives and internal decision flows, all with versioned reasoning chains and explicit sources.
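A minimal sketch of such a snapshot, using illustrative names rather than the engine's actual interface, keeps every change attributable to a source and queryable afterwards.

```python
# Minimal sketch of a situational snapshot with an append-only change log,
# so each fact can be traced back to the input that set it. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Snapshot:
    facts: dict = field(default_factory=dict)     # key -> current value
    history: list = field(default_factory=list)   # append-only change log

    def update(self, key: str, value, source: str, source_type: str) -> None:
        self.history.append({
            "version": len(self.history) + 1,
            "key": key,
            "old": self.facts.get(key),
            "new": value,
            "source": source,
            "source_type": source_type,  # "regulation", "procedure", "event_stream", "human_note"
        })
        self.facts[key] = value

    def provenance(self, key: str) -> list:
        return [h for h in self.history if h["key"] == key]

state = Snapshot()
state.update("reporting_deadline_days", 30, "internal-procedure-7.2", "procedure")
state.update("incident_open", True, "soc-event-8812", "event_stream")
print(state.provenance("reporting_deadline_days"))
```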
Verified language pipelines
Verified language pipelines ensure that every inference step within these systems carries metadata suitable for later reconstruction.
This is not just for academic explainability. It supports:
- AI Act technical documentation requirements for high risk systems
- internal model governance, including regression analysis across model versions
- external audit, where regulators or clients need to see how AI components behaved on specific cases
In 2026 these pipelines will be tightened and exposed more directly to clients through tools that resemble Glassbox style introspection environments, giving engineers and auditors fine grained visibility into what actually happened when an AI assisted decision was made.
Composable micro agents
Rather than pursuing a single “do everything” assistant, our guidance layer treats intelligence as a set of micro agents that share an awareness of the same domain map.
Each agent is specialised: contract clause triage, statute change monitoring, risk scoring for a particular regulatory framework, evidence graph enrichment for a specific domain.
The guidance layer knows:
- which agent to activate for a given user intent
- which data views and tools to grant it
- when to hand control to a human operator
By mid 2026 these micro agents will be embedded into enterprise pilots, especially in compliance, supervision and strategic planning workflows where traceable decision scaffolding is more valuable than full automation.
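In its simplest form, that routing decision can be pictured as a registry lookup plus a handoff rule, as in the sketch below; the agent names, intents and confidence threshold are assumptions for illustration, not the production guidance layer.

```python
# Minimal sketch of guidance layer routing: pick an agent for an intent,
# grant it scoped data views, or hand control to a human. Names are hypothetical.
AGENT_REGISTRY = {
    "clause_triage":   {"intents": {"review_contract"},  "data_views": {"contracts"}},
    "statute_monitor": {"intents": {"track_regulation"}, "data_views": {"statutes", "gazettes"}},
    "risk_scoring":    {"intents": {"score_exposure"},   "data_views": {"transactions"}},
}

def route(intent: str, confidence: float, handoff_threshold: float = 0.6) -> dict:
    if confidence < handoff_threshold:
        return {"action": "handoff_to_human", "reason": "low confidence", "intent": intent}
    for name, spec in AGENT_REGISTRY.items():
        if intent in spec["intents"]:
            return {"action": "activate", "agent": name, "data_views": sorted(spec["data_views"])}
    return {"action": "handoff_to_human", "reason": "no registered agent", "intent": intent}

print(route("review_contract", confidence=0.85))
print(route("novel_request", confidence=0.9))   # no agent registered -> human operator
```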
Academy, podcast and ecosystem
Tools alone are not enough. People need to understand how to use them, challenge them and design around them.
terlouw.academy in motion
The groundwork for terlouw.academy was laid throughout 2025. In 2026 it becomes a live learning environment focused on applied intelligence, not theory.
Initial tracks will cover:
- applied legal AI with Lexiana, including high risk deployment patterns under the AI Act
- AI provenance and forensic workflows using our Verify stack and content credential standards
- system design for traceable, audited AI in regulated domains
- agentic architectures with our compression and guidance frameworks
Courses will combine asynchronous modules, live workshops and assessments designed to match the real friction points we see in client deployments.
Weekly AI conversations
From February 2026 a weekly podcast co produced with Inmeso Insights will deliver raw, technical conversations on applied AI. In parallel, the Lexiana AI podcast will focus on the intersection of law, reasoning and automation, unpacking real workflows and regulatory developments.
By the end of 2026, these channels, together with academy modules and Persona based explainers, will form a coherent ecosystem. The same reasoning stacks that power client systems will power our own content.
New projects: a few signals
Some work in 2026 will remain deliberately quiet. A few strands are ready for a controlled reveal.
Project Atlas
Atlas is our internal name for the guidance layer that sits above our micro agents.
It builds a graph of domains, obligations, entities, datasets and tools. Instead of routing everything to a single model, Atlas can:
- decompose a high level question into domain specific sub tasks
- map each sub task onto the right agent, model and data slice
- reassemble the outputs into a coherent, provenance rich answer
In 2026 Atlas lives mostly behind the scenes, orchestrating Lexiana, Verify and decision support components in enterprise pilots. Later it will surface in consoles that let clients see how their AI assisted workflows are wired.
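The orchestration pattern itself is compact: decompose, dispatch, reassemble with provenance. The decomposition rule and agent stubs in the sketch below are illustrative assumptions, not Atlas's real domain graph.

```python
# Minimal sketch of Atlas-style orchestration with stubbed agents.
# The decomposition and agent behaviour are placeholders for illustration.
def decompose(question: str) -> list:
    # A real decomposition would consult the domain graph; here we hard-code two sub-tasks.
    return [
        {"domain": "legal", "task": f"identify obligations relevant to: {question}"},
        {"domain": "evidence", "task": f"list artefacts needed to support: {question}"},
    ]

def run_agent(subtask: dict) -> dict:
    # Stub agent call; in production this would dispatch to a specialised micro agent.
    return {
        "subtask": subtask,
        "answer": f"[{subtask['domain']} analysis]",
        "sources": [f"{subtask['domain']}-kb"],
    }

def reassemble(results: list) -> dict:
    return {
        "answer": " ".join(r["answer"] for r in results),
        "provenance": [{"domain": r["subtask"]["domain"], "sources": r["sources"]} for r in results],
    }

results = [run_agent(t) for t in decompose("Can this recording be used in the dismissal case?")]
print(reassemble(results))
```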
Project Glassbox
Glassbox is an introspection environment built on top of our universal provenance trace.
It allows engineers, auditors and advanced users to:
- replay reasoning sequences step by step
- inspect intermediate retrieval sets and decision branches
- compare model versions on the same evidence bundles
While the first Glassbox iterations will be internal, we expect early adopter clients in highly regulated sectors to start using it as part of their AI governance toolchain in 2026.
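One of the most useful Glassbox operations is comparing two model versions on the same evidence bundle. The sketch below shows the shape of that comparison with stubbed runner functions; it is not a real replay API.

```python
# Minimal sketch of a Glassbox-style version comparison over one evidence bundle.
# run_version is a stub; a real replay would re-execute recorded reasoning steps.
def run_version(model_version: str, evidence_bundle: list) -> list:
    return [{"step": i, "model": model_version, "output": f"analysis of {item}"}
            for i, item in enumerate(evidence_bundle)]

def diff_runs(run_a: list, run_b: list) -> list:
    return [
        {"step": a["step"], "a": a["output"], "b": b["output"]}
        for a, b in zip(run_a, run_b)
        if a["output"] != b["output"]
    ]

bundle = ["contract_v2.pdf", "whatsapp_export.zip"]
changes = diff_runs(run_version("lexiana-3.5", bundle), run_version("lexiana-3.6-rc", bundle))
print(changes or "no behavioural drift on this bundle")
```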
Project Continuum
Continuum addresses long lived context under strict data minimisation rules.
Instead of storing raw conversational or document history indefinitely, Continuum works with compressed, cryptographically anchored snapshots. These snapshots preserve meaning and relational structure, not verbatim text. When needed, the system can reconstruct a richer state from the snapshot and its provenance chain, then retire it again.
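A minimal sketch of that idea, with illustrative field names, pairs a structured summary with a cryptographic anchor of the original material so the verbatim record can be retired.

```python
# Minimal sketch of a Continuum-style snapshot: structured summary plus a
# cryptographic anchor of the raw material. Field names are hypothetical.
import hashlib
import json

def make_snapshot(raw_history: list, summary: dict) -> dict:
    anchor = hashlib.sha256(json.dumps(raw_history, sort_keys=True).encode()).hexdigest()
    return {"summary": summary, "anchor_sha256": anchor, "raw_retained": False}

raw_history = [
    {"role": "client", "text": "We received a notice of default on 3 March."},
    {"role": "lexiana", "text": "The cure period under clause 12.2 is 30 days."},
]
snapshot = make_snapshot(
    raw_history,
    summary={"entities": ["notice of default", "clause 12.2"], "deadline_days": 30},
)
# The verbatim exchange can now be deleted; later reconstruction works from the
# summary, while the anchor proves the snapshot corresponds to the original record.
print(snapshot)
```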
Continuum will begin as an internal capability for managing context in Lexiana and in our decision engines. Over time it can become a differentiator for clients who need continuity without building permanent data lakes of sensitive material.
Execution plan for 2026
The winter 2025 update already sketched the outline. In 2026 we will execute along the following track.
Early 2026
- launch of the Lexiana AI podcast and the weekly AI systems podcast with Inmeso Insights
- release of Lexiana 3.5, including the macOS application, upgraded retrieval and clause reasoning engines and the first iteration of instrumented reasoning
- rollout of the universal provenance trace framework across Lexiana, Verify prototypes and internal decision engines
Mid 2026
- deployment of the multi context engine and composable micro agents into selected enterprise pilots in legal, compliance and supervision
- expansion of Lexiana’s multilingual and multi jurisdiction capabilities, with a focus on pan European uptake
- publication of a detailed technical paper on explainable legal inference and instrumented reasoning in production systems
Late 2026
- public unveiling of Persona as an intelligent communications interface, integrated with Lexiana, Verify and academy content
- introduction of client facing Glassbox tooling for AI auditing and introspection
- demonstration of a unified Terlouw ecosystem in real scenarios: Lexiana for reasoning, Verify for evidence, Persona for communication, Atlas and Continuum for orchestration and context, academy and podcasts for skills and culture
We are not optimising for the loudest announcements. We are optimising for systems that will still make sense when a regulator, a general counsel or a judge revisits a 2026 decision years later and asks the only question that really matters: how, exactly, was this decided?