Century Update: Citations and IAM Controls in Daily Operations
A product update on how Century handles citation quality and IAM-aware behavior across operational workflows.
In high-volume policy and support workflows, two things determine whether an AI assistant is operationally usable:
- Citations that are consistent and reviewable
- IAM controls that match how teams actually work
Century is positioned as a controlled, secure enterprise platform for LLM adoption in regulated contexts, with source verification, logging/audit, and perimeter deployment as core design points.
This update focuses on making citations and IAM boundaries behave predictably in daily operations—so teams spend less time in review loops and more time shipping improvements.
What's improved
1. Stronger citation consistency for high-throughput workflows
When citations are inconsistent, reviews fail for avoidable reasons:
- missing links,
- unstable source references,
- citations that don't match the claim scope.
The update tightens citation behavior so that operational teams can rely on a stable evidence format:
- each answer carries a consistent "source package" (doc/table excerpt + ID),
- citation coverage is treated as a first-class quality signal,
- responses default to "no source → no claim" for knowledge assertions.
This aligns with Century's core "answers strictly based on sources" approach.
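The "source package" and "no source → no claim" behavior described above can be sketched in a few lines. This is an illustrative sketch only: Century's actual schema and field names are not public, so `SourcePackage`, `doc_id`, and `gate_claim` are hypothetical stand-ins for the pattern, not its real API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SourcePackage:
    doc_id: str   # stable identifier of the source document or table
    excerpt: str  # the exact passage the claim is grounded in

def gate_claim(claim: str, sources: list[SourcePackage]) -> str:
    """Apply the 'no source -> no claim' default: a knowledge assertion
    without at least one source package is withheld, not emitted."""
    if not sources:
        return "[withheld: no supporting source]"
    refs = ", ".join(s.doc_id for s in sources)
    return f"{claim} [sources: {refs}]"
```

Because every answer carries the same package shape, reviewers can check citation coverage mechanically instead of hunting for links claim by claim.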
2. IAM-aware retrieval with stricter boundary checks across teams
Cross-team data access is where enterprise assistants typically fail: the system retrieves "technically relevant" content that is organizationally off-limits.
This update strengthens IAM-aware retrieval so that:
- retrieval scope is explicitly tied to role boundaries,
- context assembly is segmented by permissions,
- citations never expose references to inaccessible materials.
Century's secure LLM perimeter design is explicitly built around RBAC and auditability, not informal conventions.
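The boundary checks above reduce to one rule: permission filtering happens before context assembly, so inaccessible material never reaches the prompt or the citation list. A minimal sketch, with hypothetical role-to-scope mappings and document fields (Century's real RBAC model is not shown here):

```python
# Hypothetical role -> allowed retrieval scopes mapping.
ROLE_SCOPES = {
    "support_agent": {"kb_public", "support_macros"},
    "policy_editor": {"kb_public", "policy_drafts"},
}

def retrieve_for_role(role: str, candidates: list[dict]) -> list[dict]:
    """Drop any candidate outside the role's scopes. Downstream context
    assembly and citation building only ever see documents that passed
    this check, so citations cannot reference inaccessible material."""
    allowed = ROLE_SCOPES.get(role, set())
    return [d for d in candidates if d["scope"] in allowed]
```

Filtering at the retrieval layer, rather than redacting afterwards, is what makes the "citations never expose inaccessible references" guarantee enforceable.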
Operational impact
Fewer review cycles caused by missing references
Teams reviewing policy/support outputs spend less time asking "where did this come from?" and more time validating substance—because the evidence format is consistent.
Cleaner role-based separation of answer context
When IAM boundaries are enforced end-to-end, the assistant's behavior becomes predictable:
- users see only what they're allowed to use,
- reviewers can validate access correctness with logs/traces,
- compliance teams get a clearer story for sign-off.
More predictable "audit packaging" for monthly reviews
With stable citation packages + trace metadata, monthly audits become packaging work, not forensic work:
- request/response logs,
- trace IDs,
- evaluation snapshots,
- release gate records.
Century's emphasis on logging/audit and observability is specifically aimed at making AI manageable and reviewable.
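The four audit artifacts listed above can be assembled into one reviewable unit. A sketch under assumed names (`package_audit_bundle` and its fields are illustrative, not Century's actual audit format); the content hash lets reviewers confirm the bundle was not altered after packaging:

```python
import hashlib
import json

def package_audit_bundle(trace_id, request_log, response_log,
                         eval_snapshot, release_gate):
    """Bundle the monthly-audit artifacts into one JSON-serializable
    record with a SHA-256 digest over its sorted contents."""
    bundle = {
        "trace_id": trace_id,
        "request_log": request_log,
        "response_log": response_log,
        "evaluation_snapshot": eval_snapshot,
        "release_gate_record": release_gate,
    }
    payload = json.dumps(bundle, sort_keys=True)
    bundle["digest"] = hashlib.sha256(payload.encode()).hexdigest()
    return bundle
```

With this shape, the monthly review consumes a fixed set of fields per request rather than reconstructing evidence from scattered logs.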
Suggested rollout pattern (so operations feel it immediately)
- Enable citation consistency rules on the highest-volume workflows first (policy Q&A, support macros).
- Run role-based retrieval tests against real team boundaries, covering both positive (allowed access still works) and negative (denied access stays denied) cases.
- Add a monthly "governance drift" review: prompts + policies + corpora + eval sets as one release unit.