Best practices and recipes
A collection of tips and code snippets designed to help you write cleaner, more optimised Cerbos policies.
Modelling policies
With Cerbos, access rules are always resource-oriented, and the policies you write map to these resources within your system. A resource can be anything, and the way you model your policies is up to you: you can achieve the same logical outcome in numerous ways, whether action-led, role-led, attribute-led, or with combinations thereof.
That said, some patterns will lend themselves more naturally to certain scenarios — let’s take a look at some different approaches. Consider this business model:
| Actions | IT_ADMIN | JR_MANAGER | SR_MANAGER | USER | CFO |
|---|---|---|---|---|---|
| run | | x | x | | x |
| view | x | x | x | x | x |
| edit | | | x | | x |
| save | | | x | | x |
| share | | x | x | | x |
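To make the trade-offs concrete, here is a small Python sketch (illustrative only, not Cerbos syntax) that models the permission matrix above as plain data. The action-led and role-led policies discussed below are simply two different slices of this same matrix:

```python
# The permission matrix above, modelled as plain data (illustrative only).
MATRIX = {
    "run":   {"JR_MANAGER", "SR_MANAGER", "CFO"},
    "view":  {"IT_ADMIN", "JR_MANAGER", "SR_MANAGER", "USER", "CFO"},
    "edit":  {"SR_MANAGER", "CFO"},
    "save":  {"SR_MANAGER", "CFO"},
    "share": {"JR_MANAGER", "SR_MANAGER", "CFO"},
}

# Action-led view: which roles may perform a given action?
def roles_for(action: str) -> set[str]:
    return MATRIX[action]

# Role-led view: which actions may a given role perform?
def actions_for(role: str) -> set[str]:
    return {action for action, roles in MATRIX.items() if role in roles}

print(sorted(roles_for("run")))           # ['CFO', 'JR_MANAGER', 'SR_MANAGER']
print(sorted(actions_for("JR_MANAGER")))  # ['run', 'share', 'view']
```

Whichever representation you pick for your policies, it should round-trip cleanly back to this matrix.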
Representing this as a resource policy could be achieved in a variety of ways. Let’s take a look at each:
Action-led
Here, we focus on an action, and list all the roles that can perform that action:
# Principals in the following three roles can perform the `run` action
- actions:
    - "run"
  effect: EFFECT_ALLOW
  roles:
    - JR_MANAGER
    - SR_MANAGER
    - CFO

# All principals can perform the `view` action
- actions:
    - "view"
  effect: EFFECT_ALLOW
  roles:
    - "*"
This approach might be suitable if any of the following apply to your system:
- Your roles are "similar" in what they can do, like `JR_MANAGER` and `SR_MANAGER`; it's likely that `JR_MANAGER` will have a subset of the permissions of `SR_MANAGER`. There will of course be duplication in either direction, but it's often easier to reason about this from an action perspective.
- You have "high-risk" actions, and you want to be able to tell at a glance which roles have access to a particular action. Explicitly listing roles per action makes it much more difficult to accidentally give unwanted permissions to the wrong user.
- You have a relatively high number of roles to a low number of actions.
Role-led
Alternatively, we can focus on a role, and list all the actions the role can perform:
# These three actions can be performed by principals in the `JR_MANAGER` role
- actions:
    - "run"
    - "view"
    - "share"
  effect: EFFECT_ALLOW
  roles:
    - JR_MANAGER
You might opt for a role-led approach if:
- You have distinct roles, where it's rare for your roles to share common actions.
- You have a relatively low number of roles to a high number of actions.
Hybrid
Perhaps we want to use a combination of the two:
# Principals in the `SR_MANAGER` or `CFO` roles can perform all actions
- actions:
    - "*"
  effect: EFFECT_ALLOW
  roles:
    - SR_MANAGER
    - CFO
This might suit scenarios that don't fall neatly into either of the previous two approaches.
Blanket allow, granular deny
We can opt to explicitly state which actions a user cannot do:
# Principals in the `JR_MANAGER` role can perform all actions, other than `edit` and `save`
- actions:
    - "*"
  effect: EFFECT_ALLOW
  roles:
    - "JR_MANAGER"
- actions:
    - "edit"
    - "save"
  effect: EFFECT_DENY
  roles:
    - "JR_MANAGER"
This would suit scenarios where a principal can perform nearly every action, and you want to explicitly list disallowed actions.
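Cerbos gives an explicit `EFFECT_DENY` precedence over any matching `EFFECT_ALLOW`, and denies by default when no rule matches. The following Python sketch (a simplified model, not the actual Cerbos engine) illustrates how the two rules above combine:

```python
# Simplified model (not the real engine) of deny-override evaluation for the
# two rules above: a matching EFFECT_DENY always wins; no match means deny.
RULES = [
    {"actions": {"*"}, "roles": {"JR_MANAGER"}, "effect": "EFFECT_ALLOW"},
    {"actions": {"edit", "save"}, "roles": {"JR_MANAGER"}, "effect": "EFFECT_DENY"},
]

def check(role: str, action: str) -> str:
    effect = "EFFECT_DENY"  # default-deny when nothing matches
    for rule in RULES:
        if role in rule["roles"] and ("*" in rule["actions"] or action in rule["actions"]):
            if rule["effect"] == "EFFECT_DENY":
                return "EFFECT_DENY"  # explicit deny takes precedence
            effect = "EFFECT_ALLOW"
    return effect

print(check("JR_MANAGER", "run"))   # EFFECT_ALLOW
print(check("JR_MANAGER", "edit"))  # EFFECT_DENY
print(check("USER", "run"))         # EFFECT_DENY (no matching rule)
```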
Attribute-led
Consider a hypothetical scenario where access to a data set is governed by the audience (`aud`) claim in the caller's JWT. Given the dynamic nature of audiences, it's not practical to enumerate all roles that have access. What we can do instead is globally allow all roles and actions, and then determine access based on attributes passed in the JWT. Take a look at the following example policy:
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  resource: "data_set"
  version: default
  rules:
    - actions: ["*"]
      roles: ["*"]
      effect: EFFECT_ALLOW
      condition:
        match:
          all:
            of:
              - expr: has(request.aux_data.jwt.aud)
              - expr: >
                  "my.custom.audience" in request.aux_data.jwt.aud
In the above, we blanket-allow all actions and roles, but rely specifically on the `aud` claim parsed from the JWT to determine access.
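Read as application-side logic, the two condition expressions amount to the following check (a hypothetical Python equivalent, assuming the JWT has already been verified and decoded into a dict):

```python
# Hypothetical equivalent of the policy condition above: access is only
# granted when the verified JWT carries our custom audience in its aud claim.
def is_allowed(jwt_claims: dict) -> bool:
    aud = jwt_claims.get("aud")  # mirrors has(request.aux_data.jwt.aud)
    if aud is None:
        return False
    return "my.custom.audience" in aud

print(is_allowed({"aud": ["my.custom.audience", "other"]}))  # True
print(is_allowed({"aud": ["other"]}))                        # False
print(is_allowed({}))                                        # False
```

In the real policy, Cerbos evaluates this condition itself; your application only needs to pass the JWT along as auxiliary data.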
Adding self-service custom roles
Imagine this scenario: you’re an admin in a multi-tenant system, and you want a method by which you can copy an existing role, and then select which permissions/actions to enable or disable for each.
There are two ways of approaching this:
Static Policies / Dynamic Context
This is the idiomatic way of solving this use-case in Cerbos. In the vast majority of cases, it is possible to have the policies statically defined and to pass in dynamic context as attributes of a principal. This dynamic context can be any arbitrary data such as the principal’s location, age, or specific roles it has within the context of an organizational unit (a department, a tenant or a project, for example). This contextual data would be retrieved at request time from another service or a data store. Let’s look at an example.
Here is a resource policy for a resource of type `"workspace"`:
apiVersion: "api.cerbos.dev/v1"
resourcePolicy:
  version: "default"
  resource: "workspace"
  rules:
    - actions:
        - workspace:view
        - pii:view
      effect: EFFECT_ALLOW
      roles:
        - USER
      condition:
        match:
          expr: P.attr.workspaces[R.id].role == "OWNER"
Notice how the condition relies on context passed in within the `P.attr.workspaces` map, with the key being the resource ID and the value holding a predefined role such as `"OWNER"`. We can grant access to a principal with the `USER` role by constructing the following request payload:
cat <<EOF | curl --silent "http://localhost:3592/api/check/resources?pretty" -d @-
{
  "requestId": "quickstart",
  "principal": {
    "id": "123",
    "roles": [
      "USER"
    ],
    "attr": {
      "workspaces": {
        "workspaceA": {
          "role": "OWNER"
        },
        "workspaceB": {
          "role": "MEMBER"
        }
      }
    }
  },
  "resources": [
    {
      "actions": [
        "workspace:view",
        "pii:view"
      ],
      "resource": {
        "id": "workspaceA",
        "kind": "workspace"
      }
    },
    {
      "actions": [
        "workspace:view",
        "pii:view"
      ],
      "resource": {
        "id": "workspaceB",
        "kind": "workspace"
      }
    }
  ]
}
EOF
using System;
using System.Collections.Generic;
using Cerbos.Sdk.Builders;
using Cerbos.Sdk;

internal class Program
{
    private static void Main(string[] args)
    {
        var client = new CerbosClientBuilder("http://localhost:3593").WithPlaintext().BuildBlockingClient();
        string[] actions = { "workspace:view", "pii:view" };

        CheckResourcesResult result = client
            .CheckResources(
                Principal.NewInstance("123", "USER")
                    .WithAttribute("workspaces", AttributeValue.MapValue(new Dictionary<string, AttributeValue>()
                    {
                        {
                            "workspaceA", AttributeValue.MapValue(new Dictionary<string, AttributeValue>()
                            {
                                { "role", AttributeValue.StringValue("OWNER") }
                            })
                        },
                        {
                            "workspaceB", AttributeValue.MapValue(new Dictionary<string, AttributeValue>()
                            {
                                { "role", AttributeValue.StringValue("MEMBER") }
                            })
                        }
                    })),
                ResourceAction.NewInstance("workspace", "workspaceA")
                    .WithActions(actions),
                ResourceAction.NewInstance("workspace", "workspaceB")
                    .WithActions(actions)
            );

        foreach (string n in new string[] { "workspaceA", "workspaceB" })
        {
            var r = result.Find(n);
            Console.Write(String.Format("\nResource: {0}\n", n));
            foreach (var i in r.GetAll())
            {
                String action = i.Key;
                Boolean isAllowed = i.Value;
                Console.Write(String.Format("\t{0} -> {1}\n", action, isAllowed ? "EFFECT_ALLOW" : "EFFECT_DENY"));
            }
        }
    }
}
package main

import (
	"context"
	"log"

	"github.com/cerbos/cerbos-sdk-go/cerbos"
)

func main() {
	c, err := cerbos.New("localhost:3593", cerbos.WithPlaintext())
	if err != nil {
		log.Fatalf("Failed to create client: %v", err)
	}

	principal := cerbos.NewPrincipal("123", "USER")
	// The nested values use map[string]interface{} as strictly typed nested maps aren't supported
	principal.WithAttr("workspaces", map[string]map[string]interface{}{
		"workspaceA": {
			"role": "OWNER",
		},
		"workspaceB": {
			"role": "MEMBER",
		},
	})

	kind := "workspace"
	actions := []string{"workspace:view", "pii:view"}

	batch := cerbos.NewResourceBatch()
	batch.Add(cerbos.NewResource(kind, "workspaceA"), actions...)
	batch.Add(cerbos.NewResource(kind, "workspaceB"), actions...)

	resp, err := c.CheckResources(context.Background(), principal, batch)
	if err != nil {
		log.Fatalf("Failed to check resources: %v", err)
	}

	log.Printf("%v", resp)
}
package demo;

import static dev.cerbos.sdk.builders.AttributeValue.mapValue;
import static dev.cerbos.sdk.builders.AttributeValue.stringValue;

import java.util.Map;

import dev.cerbos.sdk.CerbosBlockingClient;
import dev.cerbos.sdk.CerbosClientBuilder;
import dev.cerbos.sdk.CheckResult;
import dev.cerbos.sdk.builders.Principal;
import dev.cerbos.sdk.builders.ResourceAction;

public class App {
    public static void main(String[] args) throws CerbosClientBuilder.InvalidClientConfigurationException {
        CerbosBlockingClient client = new CerbosClientBuilder("localhost:3593").withPlaintext().buildBlockingClient();

        for (String n : new String[]{"workspaceA", "workspaceB"}) {
            CheckResult cr = client.batch(
                    Principal.newInstance("123", "USER")
                            .withAttribute("workspaces", mapValue(Map.of(
                                    "workspaceA", mapValue(Map.of(
                                            "role", stringValue("OWNER")
                                    )),
                                    "workspaceB", mapValue(Map.of(
                                            "role", stringValue("MEMBER")
                                    ))
                            )))
                    )
                    .addResources(
                            ResourceAction.newInstance("workspace", "workspaceA")
                                    .withActions("workspace:view", "pii:view"),
                            ResourceAction.newInstance("workspace", "workspaceB")
                                    .withActions("workspace:view", "pii:view")
                    )
                    .check().find(n).orElse(null);

            if (cr != null) {
                System.out.printf("\nResource: %s\n", n);
                cr.getAll().forEach((action, allowed) -> {
                    System.out.printf("\t%s -> %s\n", action, allowed ? "EFFECT_ALLOW" : "EFFECT_DENY");
                });
            }
        }
    }
}
const { GRPC: Cerbos } = require("@cerbos/grpc");

const cerbos = new Cerbos("localhost:3593", { tls: false });

(async () => {
  const kind = "workspace";
  const actions = ["workspace:view", "pii:view"];

  const cerbosPayload = {
    principal: {
      id: "123",
      roles: ["USER"],
      attributes: {
        workspaces: {
          workspaceA: {
            role: "OWNER",
          },
          workspaceB: {
            role: "MEMBER",
          },
        },
      },
    },
    resources: [
      {
        resource: {
          kind: kind,
          id: "workspaceA",
        },
        actions: actions,
      },
      {
        resource: {
          kind: kind,
          id: "workspaceB",
        },
        actions: actions,
      },
    ],
  };

  const decision = await cerbos.checkResources(cerbosPayload);
  console.log(decision.results);
})();
<?php

require __DIR__ . '/vendor/autoload.php';

use Cerbos\Sdk\Builder\CerbosClientBuilder;
use Cerbos\Sdk\Builder\Principal;
use Cerbos\Sdk\Builder\ResourceAction;
use Symfony\Component\HttpClient\HttplugClient;

$clientBuilder = new CerbosClientBuilder("http://localhost:3592", new HttplugClient(), null, null, null);
$client = $clientBuilder->build();

$principal = Principal::newInstance("123")
    ->withRole("USER")
    ->withAttribute("workspaces", [
        "workspaceA" => [
            "role" => "OWNER"
        ],
        "workspaceB" => [
            "role" => "MEMBER"
        ]
    ]);

$type = "workspace";
$resourceAction1 = ResourceAction::newInstance($type, "workspaceA")
    ->withAction("workspace:view")
    ->withAction("pii:view");
$resourceAction2 = ResourceAction::newInstance($type, "workspaceB")
    ->withAction("workspace:view")
    ->withAction("pii:view");

$checkResourcesResult = $client->checkResources($principal, array($resourceAction1, $resourceAction2), null, null);
echo json_encode($checkResourcesResult, JSON_PRETTY_PRINT);
?>
import json

from cerbos.sdk.client import CerbosClient
from cerbos.sdk.model import Principal, Resource, ResourceAction, ResourceList
from fastapi import HTTPException, status

principal = Principal(
    "123",
    roles=["USER"],
    attr={
        "workspaces": {
            "workspaceA": {
                "role": "OWNER",
            },
            "workspaceB": {
                "role": "MEMBER",
            },
        }
    },
)

actions = ["workspace:view", "pii:view"]
resource_list = ResourceList(
    resources=[
        ResourceAction(
            Resource(
                "workspaceA",
                "workspace",
            ),
            actions=actions,
        ),
        ResourceAction(
            Resource(
                "workspaceB",
                "workspace",
            ),
            actions=actions,
        ),
    ],
)

with CerbosClient(host="http://localhost:3592") as c:
    try:
        resp = c.check_resources(principal=principal, resources=resource_list)
        resp.raise_if_failed()
    except Exception:
        raise HTTPException(
            status_code=status.HTTP_403_FORBIDDEN, detail="Unauthorized"
        )

    print(json.dumps(resp.to_dict(), sort_keys=False, indent=4))
# frozen_string_literal: true

require "cerbos"
require "json"

client = Cerbos::Client.new("localhost:3593", tls: false)

kind = "workspace"
actions = ["workspace:view", "pii:view"]

r1 = {
  kind: kind,
  id: "workspaceA"
}

r2 = {
  kind: kind,
  id: "workspaceB"
}

decision = client.check_resources(
  principal: {
    id: "123",
    roles: ["USER"],
    attributes: {
      workspaces: {
        workspaceA: {
          role: "OWNER"
        },
        workspaceB: {
          role: "MEMBER"
        }
      }
    }
  },
  resources: [
    {
      resource: r1,
      actions: actions
    },
    {
      resource: r2,
      actions: actions
    }
  ]
)

puts JSON.pretty_generate({
  results: [
    {
      resource: r1,
      actions: {
        "workspace:view": decision.allow?(resource: r1, action: "workspace:view"),
        "pii:view": decision.allow?(resource: r1, action: "pii:view")
      }
    },
    {
      resource: r2,
      actions: {
        "workspace:view": decision.allow?(resource: r2, action: "workspace:view"),
        "pii:view": decision.allow?(resource: r2, action: "pii:view")
      }
    }
  ]
})
use cerbos::sdk::attr::{attr, StructVal};
use cerbos::sdk::model::{Principal, Resource, ResourceAction, ResourceList};
use cerbos::sdk::{CerbosAsyncClient, CerbosClientOptions, CerbosEndpoint, Result};

#[tokio::main]
async fn main() -> Result<()> {
    let opt =
        CerbosClientOptions::new(CerbosEndpoint::HostPort("localhost", 3593)).with_plaintext();
    let mut client = CerbosAsyncClient::new(opt).await?;

    let principal = Principal::new("123", ["USER"]).with_attributes([attr(
        "workspaces",
        StructVal([
            ("workspaceA", StructVal([("role", "OWNER")])),
            ("workspaceB", StructVal([("role", "MEMBER")])),
        ]),
    )]);

    let actions: [&str; 2] = ["workspace:view", "pii:view"];
    let kind = "workspace";

    let resp = client
        .check_resources(
            principal,
            ResourceList::new_from([
                ResourceAction(Resource::new("workspaceA", kind), actions),
                ResourceAction(Resource::new("workspaceB", kind), actions),
            ]),
            None,
        )
        .await?;

    println!("{:?}", resp.response);

    Ok(())
}
You can find a full (and extended) example of the above in our SaaS Workspace Policy playground example.
Dynamic Policies
There might be circumstances where you want to create or update resources and actions on the fly; an example of this might be a multi-tenant platform that provides tenants the ability to manage their own policies.
If this is the case, then you can use the Admin API configured alongside a mutable database storage engine to provide this functionality. This would be handled within your application layer, with the desired policy contents provided to the PDP via the API.
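As a sketch of what that application layer might do, the function below (hypothetical names throughout, including the `tenant_<id>/report` resource naming convention) assembles a per-tenant policy document that you would then submit to your PDP via the Admin API:

```python
import json

# Hypothetical helper: build a per-tenant resource policy document. Your
# application layer would submit this to the Cerbos Admin API; the resource
# naming scheme ("tenant_<id>/report") is an illustrative convention, not a
# Cerbos requirement.
def build_tenant_policy(tenant_id: str, allowed_actions: list[str]) -> dict:
    return {
        "apiVersion": "api.cerbos.dev/v1",
        "resourcePolicy": {
            "resource": f"tenant_{tenant_id}/report",
            "version": "default",
            "rules": [
                {
                    "actions": allowed_actions,
                    "effect": "EFFECT_ALLOW",
                    "roles": ["USER"],
                },
            ],
        },
    }

print(json.dumps(build_tenant_policy("acme", ["view", "run"]), indent=2))
```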
For a full example implementation, check out this demo.
Policy repository layout
Cerbos expects the policy repository to have a particular directory layout.
- The directory must only contain Cerbos policy files, policy test files, and schemas. Any other YAML or JSON files will cause Cerbos to consider the policy repository invalid.
- If you use schemas, the `_schemas` directory must be a top-level directory at the root of the policy repo.
- All policy tests must have a file name ending in `_test` and a `.yaml`, `.yml`, or `.json` extension.
- Directories named `testdata` can be used to store test data for policy tests. Cerbos will not attempt to locate any policy files inside those directories.
- Hidden files and directories (names starting with `.`) are ignored.
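As a quick mental model of these rules, here is a hypothetical helper (not part of Cerbos, which applies these rules itself when loading a repository) that classifies a repo path accordingly:

```python
from pathlib import PurePosixPath

# Hypothetical helper mirroring the layout rules above.
def classify(path: str) -> str:
    p = PurePosixPath(path)
    if any(part.startswith(".") for part in p.parts):
        return "ignored"  # hidden files and directories are skipped
    if "testdata" in p.parts[:-1]:
        return "ignored"  # policy discovery skips testdata directories
    if p.parts[0] == "_schemas":
        return "schema"   # schemas live under the top-level _schemas directory
    if p.suffix in {".yaml", ".yml", ".json"} and p.stem.endswith("_test"):
        return "test"
    return "policy"

print(classify("resource_policies/hr/leave_request_test.yaml"))  # test
print(classify("_schemas/principal.json"))                       # schema
print(classify("resource_policies/hr/testdata/resources.yaml"))  # ignored
print(classify("derived_roles/common_roles.yaml"))               # policy
```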
A typical policy repository might resemble the following:
.
├── _schemas
│   ├── principal.json
│   └── resources
│       ├── leave_request.json
│       ├── purchase_order.json
│       └── salary_record.json
├── derived_roles
│   ├── backoffice_roles.yaml
│   └── common_roles.yaml
├── principal_policies
│   └── auditor_audrey.yaml
└── resource_policies
    ├── finance
    │   ├── purchase_order.yaml
    │   └── purchase_order_test.yaml
    └── hr
        ├── leave_request.yaml
        ├── leave_request_test.yaml
        ├── salary_record.yaml
        ├── salary_record_test.yaml
        └── testdata
            ├── auxdata.yaml
            ├── principals.yaml
            └── resources.yaml