What AI Can Tell You About Your Authorization Policies
AI shouldn’t decide who can access what, but it can help you understand what the system already allows. Used as an auditor or reviewer, AI becomes a lens for exposing scope, risk, and undocumented assumptions in authorization systems.
In the previous post, I showed how AI can help with policy authoring and analysis by accelerating the back-and-forth between intent and implementation. That workflow is exploratory by nature. You ask why something happens, how it could change, and which formulation best expresses intent.
Review and audit are different.
In review and audit, the intent is assumed to already exist. The policies are fixed. The question is no longer how authority should be expressed, but how it is already expressed and whether that expression can be understood, defended, and justified.
This difference matters because it changes how AI should be used. In authoring, AI is invited to explore alternatives. In audit, that permission must be taken away. The AI’s role shifts from collaborator to examiner: explaining behavior, enumerating scope, and surfacing consequences without proposing changes. The goal of a policy audit is not to optimize policies or propose fixes, but to understand what the current policy set allows, how broad that access is, and whether it can be defended as intentional.
Same Repository, Different Posture
To make that distinction concrete, this post uses the same acme-cedar-ai-authoring repository introduced in the authoring and analysis post. The schema, policies, and entity data are unchanged.
What has changed is how they are treated. In authoring mode, the repository is a workspace for exploration. In audit mode, it is treated as read-only evidence. The AI is not asked how to refactor policies or how to tighten access. It is asked to explain what the current policy set actually allows, and how broad those allowances are in practice. This distinction is subtle but important. Using the same artifacts makes it clear that review and audit do not require new tools or new models, only a different posture. The difference shows up not only in the questions that are asked but also in the constraints placed on the AI through the starter prompt.
In the authoring workflow, the prompt gives the AI permission to explore. It can propose alternatives, suggest refactors, and reason about hypothetical changes. That freedom is what makes authoring productive. That same freedom would be inappropriate, even dangerous, in an audit context.
The audit prompt constrains the AI. Instead of granting capabilities, it removes them. It explicitly instructs the AI to treat the schema, policies, and entities as authoritative and fixed. It forbids proposing policy changes, refactors, or improvements. It prohibits inventing new entities, actions, or attributes. And it reframes the AI’s role as explanatory rather than creative.
What the AI is allowed to do is deliberately narrow:
explain why specific requests are permitted or denied
enumerate which principals can perform which actions on which resources
identify broad or surprising access paths
summarize access in plain language, suitable for review or audit
The prompt does not determine who has access or how broad that access is; the policies and entity data do that. What the prompt enforces is role discipline: it ensures the AI behaves like a reviewer, not a designer. That distinction is critical. In audit mode, the most valuable thing an AI can do is not suggest how to improve the system, but help humans understand what the system already does and what that implies.
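For reference, a starter prompt with this posture might look something like the sketch below. The wording is illustrative rather than the repository’s exact prompt, but it captures the constraints described above.

```text
You are reviewing an existing Cedar authorization system.
Treat the schema, policies, and entity data in this repository as
authoritative and fixed. Do not propose policy changes, refactors,
or improvements. Do not invent entities, actions, or attributes
that are not present in the provided files.

You may:
- explain why a specific request is permitted or denied
- enumerate which principals can perform which actions on which resources
- identify broad or surprising access paths
- summarize access in plain language suitable for review or audit
```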
With the posture and constraints established, the next step is to see what an audit actually looks like in practice. What follows is an example policy audit conducted using the same repository and a constrained audit prompt, focusing entirely on explanation, enumeration, and risk assessment.
A Concrete Policy Audit Walkthrough
With the audit posture and constraints in place, I started by asking simple, concrete questions and then gradually pushed on scope, risk, and defensibility. At no point was the AI asked to suggest changes, only to explain what the current policy set actually allows.
Establishing an Access Baseline
To get started, I asked the following question:
What can Kate actually do?
The AI began by grounding its answer in the schema and entity data. Kate is a customer, not an employee, and that immediately limits her action set. Under the current policies, she can view the q3-plan document because she is a member of the document’s customer_readers_team, a relationship encoded in the acme-entities data and explicitly referenced in the customer view policy.
Just as importantly, the AI was clear about what Kate cannot do. She cannot edit or share documents, because those actions are restricted to employee principals by the schema. This initial response wasn’t surprising, but that’s the point. Audit starts by establishing a factual baseline before moving on to harder questions.
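The repository’s actual policy text isn’t reproduced in this post, but the rule the AI pointed to behaves roughly like the Cedar sketch below. The entity, action, and attribute names are illustrative assumptions, not the repository’s identifiers.

```cedar
// Illustrative sketch only -- names are assumptions, not the repo's policy text.
// Customers can view a document when they belong to that document's
// customer readers team.
permit (
  principal,
  action == Action::"view",
  resource
)
when { principal in resource.customer_readers_team };
```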
Expanding the View: Who Can See This Document?
Next, I widened the lens from a single principal to a single resource:
Who can view q3-plan?
This time, the AI enumerated every principal who has view access to the document and explained why each one is permitted. The list was broader than just customers. The document owner can view it. Employees on the document’s employee readers team can view it. The owner’s manager can view it. Customers on the customer readers team can view it as well.
The response also surfaced an important distinction. Employee access is constrained by a managed-device requirement, enforced by a forbid policy. Customer access is not. By the end of this step, there was a complete, explainable exposure map for the document: no hypotheticals, no proposed changes, just a clear picture of who can see it and under what conditions.
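Again as a rough sketch rather than the repository’s exact text, the managed-device restriction the AI described would take the shape of a forbid policy along these lines; the entity type and context attribute are assumptions.

```cedar
// Illustrative sketch -- the entity type and context attribute are assumptions.
// Employees are denied access unless the request comes from a managed device.
// No equivalent forbid applies to customer principals.
forbid (
  principal is Employee,
  action,
  resource
)
unless { context.device_is_managed };
```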
Surfacing Broader-Than-Expected Access Paths
With the basic exposure established, I asked a more probing question:
Are there any ways this access could be broader than expected?
Here, the AI shifted from listing individual cases to identifying patterns. Several broad access paths emerged. Managers can view all documents owned by their direct reports, regardless of document type or sensitivity. Any employee in a readers team can share a document marked as delegatable, even if they are not the owner. Team membership grants access across all documents that reference that team. Customers are not subject to device restrictions.
None of these behaviors are accidental. They follow directly from the policies as written. But seeing them described together makes their implications much clearer. This is exactly the kind of finding an audit should surface: access that feels reasonable in isolation can look much broader when viewed as a system.
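To make those patterns easier to picture, here is roughly what the first two of them look like as Cedar policies. These are sketches with assumed names, not the repository’s text.

```cedar
// Illustrative sketches with assumed names, not the repository's policies.

// Managers can view any document owned by a direct report,
// regardless of document type or sensitivity.
permit (
  principal,
  action == Action::"view",
  resource
)
when { principal == resource.owner.manager };

// Any employee on a document's readers team can share it
// when the document is marked as delegatable, owner or not.
permit (
  principal,
  action == Action::"share",
  resource
)
when {
  resource.delegatable &&
  principal in resource.employee_readers_team
};
```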
Stress-Testing Assumptions with a Realistic Scenario
To make those implications concrete, I posed a deliberately uncomfortable scenario:
If Alice put a letter to HR citing inappropriate action by her manager in the system, would Carol be able to read it? And even edit it?
The AI answered carefully and precisely. Under the current policies, Carol—Alice’s manager—can view the document because managers are allowed to view all documents owned by their direct reports. However, Carol cannot edit or share it. Those actions are restricted to document owners or, in some cases, delegated team members. This distinction matters. The policy does not give managers full control, but it does give them visibility. That may be intentional, but it has consequences.
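The reason for that split is visible in the shape of the policies themselves. In a sketch with assumed names, edit rights hinge on ownership, which the manager-view rule never grants:

```cedar
// Illustrative sketch with assumed names.
// Editing is tied to ownership; nothing grants edit or share
// to the owner's manager.
permit (
  principal,
  action == Action::"edit",
  resource
)
when { principal == resource.owner };
```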
I followed up with an even sharper question:
Is there any way with the current policy set for Alice to avoid this?
The answer was unambiguous: no. Manager access is unconditional. Alice cannot opt out of it through document attributes or team membership. This is not a bug or an edge case, but rather exactly how authority is expressed in the current policies.
From Behavior to Defensibility
At this point, the conversation shifted from behavior to intent:
What evidence would an auditor need to justify this behavior as intentional?
The AI’s response no longer focused on policy logic alone. Instead, it described the kinds of artifacts an auditor would expect to see: policy comments explaining the rationale, test cases explicitly validating manager access, design documentation, stakeholder approval, and security review notes.
Crucially, it then observed that none of those artifacts exist in the repository. The finding isn’t that the policy is wrong, but that its intent is undocumented. Without evidence that this access pattern was deliberately designed, an auditor would likely flag it as potentially unintended broad access rather than a conscious trade-off.
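For comparison, documented intent can live right next to the policy itself. Here is a sketch of what that might look like in Cedar, using an annotation and a rationale comment; nothing like this exists in the repository today, and the names and rationale are placeholders.

```cedar
// Illustrative sketch of documented intent; nothing like this exists
// in the repository today. Names and rationale are placeholders.
@id("manager-visibility")
// Rationale: managers need read visibility into their direct reports'
// documents for oversight. Deliberately covers all document types;
// see the design doc and security review notes for the approval record.
permit (
  principal,
  action == Action::"view",
  resource
)
when { principal == resource.owner.manager };
```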
Evaluating a Common Defense
A common response to scenarios like this is to narrow the scope of the system itself, so I asked:
How would you respond to the claim that this platform isn’t designed to house HR or other documents that should be kept confidential from an employee’s manager?
The AI evaluated that defense against the implementation. While the platform description emphasizes collaboration, the schema defines a generic document model with no type restrictions. A classification attribute exists, but policies do not use it. There are no validations or documented exclusions preventing sensitive documents from being stored.
The conclusion was measured but pointed. The defense is plausible, but it is not substantiated by the implementation. As the AI summarized, the absence of enforcement or documentation makes this look less like an intentional design constraint and more like a retroactive justification.
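To see the gap concretely: if the platform truly were not meant to hold confidential material, you would expect the classification attribute to appear somewhere in the policy set. The sketch below shows what that would look like, purely to illustrate the absence rather than to recommend a change; the attribute values are assumptions.

```cedar
// Shown only to illustrate what "using the classification attribute"
// would look like; nothing in the repository does this. Values are assumptions.
forbid (
  principal,
  action,
  resource
)
when {
  resource has classification &&
  resource.classification == "confidential" &&
  principal != resource.owner
};
```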
What This Example Shows
Taken together, this walkthrough illustrates what audit mode looks like in practice. The AI never proposes a policy change. It never suggests a refactor. Instead, it helps surface scope, risk, and undocumented assumptions by explaining what the system already allows. In review and audit, that kind of clarity is far more valuable than creativity.
Audit Is About Clarity, Not Creativity
Policy audits are not design exercises. They are about understanding what authority has already been encoded, how broad that authority really is, and whether it can be defended as intentional.
Used correctly, AI is well suited to this work. When constrained to only explain and enumerate, it becomes a powerful lens for surfacing access paths, stress-testing assumptions, and exposing gaps between implementation and documentation. What it does not do is redesign policy on the fly.
The same model that accelerates authoring becomes valuable in audit only when its freedom is reduced. That constraint is not a limitation; it is what makes the AI a trustworthy reviewer. By separating exploration from verification, and creativity from accountability, teams can use AI to gain confidence in their authorization systems without surrendering control.
In audit mode, AI doesn’t decide what should change. It helps you see, clearly and sometimes uncomfortably, what the system actually allows.
Photo Credit: Inspecting with the help of AI, from DALL-E (public domain)


