Policy Authoring and Analysis with AI
In my last post, I argued that policy does not belong in an LLM prompt. Authorization is about authority and scope, not about persuading a language model to behave. Prompts express intent; policies define what is allowed. Mixing the two creates systems that are brittle at best and dangerous at worst.
That raises the obvious follow-up question: So where can AI actually help?
The answer, in practice, is policy authoring and policy analysis. That work doesn’t show up in architectural diagrams; it happens in the day-to-day business of writing, reviewing, and changing policies. What surprised me while working through this material is how tightly those two activities are coupled.
Where AI Can Help
In real systems, policy authoring rarely starts with code. Instead, it often starts with questions:
Why is this request allowed?
What would cause it to be denied?
How narrow is this rule, really?
What happens if I change just this one thing?
Those are analysis questions, but they arise before and during authoring, not after. As soon as you start writing or modifying policies, you’re already analyzing them. AI tools are well-suited to this part of the work. They can:
Explain existing policy behavior in plain language
Say why access will be allowed or denied in specific scenarios
Propose alternative formulations
Surface edge cases and trade-offs you might miss
They are not deciding access. Rather, they’re helping you reason about policies that remain deterministic and externally enforced.
A Concrete Place to Start
To help make this clearer, I put together a small GitHub repository that you can use to work through this yourself. The repository reuses the ACME Cedar schema and policies I used for examples in Appendix A of my book, Dynamic Authorization. This repo adds just enough structure to support hands-on, AI-assisted work. If you explore it, three things are worth calling out early:
ai/cursor/README.md explains how the repo is meant to be used and, just as importantly, what it is not for.
ai/cursor/authoring-guidelines.md lays out the human-in-the-loop constraints. These aren’t optional suggestions; they’re the safety rails.
ai/cursor/starter-prompt.md defines how the AI is expected to behave.
That starter prompt matters more than it might seem. It’s not there for convenience. It shapes how the AI interprets context, authority, and its own role. Rather than expressing authorization rules, the starter prompt limits the AI’s scope of participation: it can propose, explain, and compare policy options, but it cannot invent model elements or make decisions.
Authoring and Analyzing are Complementary Activities
When working with real authorization policies, authoring and analysis are best understood as complementary activities rather than separate phases. You do not finish writing a policy and then analyze it later. Instead, analysis continuously shapes how policies are authored, refined, and understood.
That interplay becomes clear as soon as you start with a concrete request, such as:
{
  "principal": "User::\"kate\"",
  "action": "Action::\"view\"",
  "resource": "Document::\"q3-plan\""
}
The first step is analytical. Before changing anything, you need to establish the current behavior. Asking why this request is permitted forces the existing policy logic into the open. A useful explanation should reference a specific policy and identify the relationship or condition on the resource that makes the request valid. Only with that baseline in hand does it make sense to attempt a change.
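For reference, the permitting policy might look roughly like the following sketch. This is illustrative Cedar in the spirit of the ACME examples, not the exact policy from the repo, and the readers attribute is an assumption standing in for whatever relationship the real schema defines:
// Illustrative sketch: a permit of roughly this shape would explain why the
// request succeeds. The readers attribute is assumed, not taken from the repo.
permit (
  principal,
  action == Action::"view",
  resource
)
when { principal in resource.readers };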
Once that behavior is understood, authoring questions follow naturally:
What would need to change for this request to be denied?
How could that change be made while leaving other customer access unchanged?
Where should that change live so that intent remains clear and the policy set remains maintainable?
These questions blur any clean separation between authoring and analysis. Understanding current behavior is analysis. Exploring how a specific outcome could change is authoring. In practice, the two alternate rapidly, each shaping the other.
AI assistance fits naturally into this loop. It can explain existing decisions, propose multiple ways to achieve a different outcome, and help compare the implications of those alternatives. For a narrowly scoped change like this one, those alternatives might include introducing a new forbid policy, narrowing an existing permit policy, or expressing the exception explicitly using an unless clause.
What matters is not that the AI can generate these options but that a human evaluates them. Although the alternatives may be functionally equivalent, they differ in clarity, scope, and long-term maintainability. Choosing between them is a design decision, not a mechanical one.
AI accelerates the conversation between authoring and analysis, making both activities more explicit and more efficient, while leaving responsibility for authorization behavior firmly with the human author.
The Human in the Loop
When using AI to assist with policy work, the most important discipline is how you engage with it. The value comes not from asking for answers but from asking the right sequence of questions and reviewing the results critically at each step.
Begin by asking the AI to explain the system’s current behavior. With the schema, policies, entities, and a concrete request included as context, ask a question such as:
“Which policy or policies permit this request, and what relationship on the resource makes that true?”
Review the response carefully. A good answer should reference a specific policy and point to a concrete condition. In the case of the example in the repo, you might get an answer that references membership in a reader relationship on the document. If the response is vague, or if it invents attributes or relationships that do not exist in the model, stop and correct the context before proceeding. That failure is a signal that the AI is reasoning without sufficient grounding.
Next, ask the AI to restate the authorization logic in plain language. For example:
“Explain this authorization decision as if you were describing it to a product manager.”
This step is critical. It tests whether the policy logic aligns with human intent. If the explanation is surprising or difficult to defend, that is not a problem with the explanation; it is a signal that the policy itself deserves closer scrutiny.
Once you understand the current behavior, introduce a small hypothetical change. Without modifying anything yet, ask a question like:
“What change would be required to deny this request while leaving other customer access unchanged?”
The AI may respond in several ways. One common suggestion is to add a new forbid policy that explicitly denies the request. That can be a valid approach in some situations, but it is rarely the only option, and it is often worth exploring alternatives before expanding the policy set.
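For the request above, such a forbid might be as small as this sketch (illustrative Cedar using the entities from the example request):
// Hypothetical forbid policy that denies exactly this request. Cedar's
// deny-overrides semantics mean this wins even when a permit also matches.
forbid (
  principal == User::"kate",
  action == Action::"view",
  resource == Document::"q3-plan"
);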
You can then refine the discussion with a follow-up question:
“What if instead of adding a new policy, we wanted to modify one of the existing policies to do this?”
In response, the AI may suggest modifying an existing permit policy by adding a condition to its when clause, typically an extra conjunct that explicitly excludes this principal and resource. This narrows the circumstances under which the permit applies without introducing a new rule.
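Assuming, for illustration, that the existing permit grants access through a readers relationship, the narrowed policy might look like this sketch:
// Illustrative sketch: the original condition gains an extra conjunct that
// carves this principal and resource out of the permit.
permit (
  principal,
  action == Action::"view",
  resource
)
when {
  principal in resource.readers &&
  !(principal == User::"kate" && resource == Document::"q3-plan")
};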
You can refine the design further by asking:
“What if I wanted to do this by adding an unless clause instead of putting a conjunction in the when clause?”
The AI may then refactor the proposal to use an unless clause that expresses the exception more directly. In many cases, this reads more clearly, especially when the intent is to describe a general rule with a specific carve-out.
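Under the same assumption about a readers relationship, the unless version might read like this sketch:
// Illustrative sketch: the general rule stays in the when clause and the
// exception is stated as an explicit carve-out.
permit (
  principal,
  action == Action::"view",
  resource
)
when { principal in resource.readers }
unless { principal == User::"kate" && resource == Document::"q3-plan" };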
At this point, it is tempting to treat these alternatives as interchangeable. They may be syntactically valid and semantically equivalent for a specific request, but they are not equivalent from a design perspective. Choosing between a new forbid policy, a narrower when clause, or a more readable unless clause is a human judgment about clarity, intent, and long-term maintainability. These are decisions about how authority should be expressed, not questions a language model can answer on its own.
This sequence illustrates the core of a human-in-the-loop workflow. The AI can generate options, surface trade-offs, and refactor logic, but it does not decide which policy best reflects organizational intent. The final responsibility for authorization behavior remains with the human reviewer, who must understand and accept the consequences of each change before it is applied.
Guardrails that Make AI Assistance Safe
When AI is embedded directly into the policy authoring and analysis loop, guardrails are not optional. They are what keep the speed and convenience of AI from turning into silent expansion of authority.
In practice, many of these guardrails are enforced through the starter prompt itself. The prompt establishes how the AI is expected to behave, what it may assume, and what it must not invent. The remaining guardrails are enforced through human review.
Treat the Schema as the Source of Truth
The starter prompt explicitly instructs the AI to treat the schema and existing policies as the source of truth. This is essential. The schema defines the universe of valid entities, actions, attributes, and relationships. Any suggestion that relies on something outside that schema is wrong by definition.
If an AI response introduces a new attribute, relationship, or entity that does not exist, stop immediately. That is not a creative proposal—it is a modeling error.
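To make that concrete, the schema is what declares the entities, attributes, and actions a policy may legitimately reference. The fragment below is a minimal, hypothetical sketch in Cedar’s schema syntax, not the ACME schema from the repo; anything not declared there, such as an invented readers attribute, is off-limits to the AI:
// Hypothetical schema fragment, not the ACME schema from the repo.
// Policies may only reference what is declared here.
entity User;
entity Document {
  readers: Set<User>,
};
action "view" appliesTo {
  principal: [User],
  resource: [Document],
};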
Require Concrete Requests and Outcomes
The starter prompt requires the AI to reason about concrete requests and expected outcomes rather than abstract policy logic. This forces proposed changes to be evaluated in terms of actual behavior:
Why is this request permitted?
What change would cause it to be denied?
What other requests would be affected?
Anchoring discussion in concrete requests makes unintended scope expansion easier to spot.
Bias Toward Least Privilege
The starter prompt biases the AI toward least-privilege outcomes and narrowly scoped changes. Without this bias, AI tools often propose solutions that technically satisfy the question but widen access more than intended.
Broad refactors and sweeping rules should be treated with skepticism unless they are clearly intentional and carefully reviewed.
Separate Exploration from Acceptance
The starter prompt makes it clear that AI output is advisory. The AI can propose, explain, and refactor policy logic, but it does not apply changes or decide which alternative is correct.
Every proposed change must be reviewed manually, line by line, and evaluated in the context of the full policy set. If a change cannot be explained clearly in plain language, it should not be accepted.
Preserve Human Accountability
Authorization policies express decisions about authority, and those decisions have real consequences. The starter prompt reinforces that responsibility for those decisions remains with the human author.
The policy engine evaluates access deterministically, but humans remain accountable for what that access allows or denies. If you would not be comfortable explaining a policy change to an auditor or stakeholder, that discomfort is a signal to revisit the design.
Where AI Belongs—and Where it Doesn’t
As I emphasized in my previous post, don’t use AI to decide who is allowed to do what. Authorization is about authority, scope, and consequence, and those decisions must remain deterministic, reviewable, and enforceable outside of any language model.
But AI is a great tool for policy authoring and analysis. Used correctly, it helps surface intent, explain behavior, and explore design alternatives faster than humans can alone. It makes the reasoning around policy more explicit, not less.
But that benefit only materializes when boundaries are clear. Prompts must not encode access rules. Schemas must remain the source of truth. Concrete requests must anchor every discussion. And humans must remain accountable for every change that affects authority.
AI can accelerate policy work, but it cannot take responsibility for it. Treat it as a powerful assistant in design and analysis, and keep it far away from enforcement and decision-making. That separation is not a limitation—it’s what makes AI useful without making it dangerous.
Photo Credit: Happy computer aiding in policy authoring and analysis from DALL-E (public domain)


