Painless access control for your software
====================

Cerbos helps you super-charge your authorization implementation by writing context-aware access control policies for your application resources. Author access rules using an intuitive YAML configuration language, use your Git-ops infrastructure to test and deploy them, and make simple API requests to the Cerbos PDP to evaluate the policies and make dynamic access decisions.

![How Cerbos works](_images/how_cerbos_works.png)

### Iterate quickly

Instantly update your access policies without re-compiling or re-deploying your application. Let your product owner tweak access policies on their own while you focus on more interesting work.

### Increase visibility

The traditional practice of weaving authorization logic into application code obscures that logic and complicates the source code. Documentation is notoriously difficult to keep up to date as the system evolves, inevitably requiring a code spelunking session to answer questions or update the documentation. This is tedious, error-prone work that consumes valuable developer time. The simple policy-as-configuration approach provided by Cerbos helps even non-developers easily understand the authorization logic of the system. Best of all, it is guaranteed to always be up to date.

### Don’t repeat yourself

In modern microservice environments it is quite common to share some resources between different services developed by different teams (e.g. a bank account in a banking system). These services could even be developed in different programming languages. Cerbos provides a language-agnostic API to share common access control policies between these disparate services, ensuring instant consistency without the need to coordinate development and deployment efforts across many teams.

### Use proven techniques

Cerbos provides advanced tooling to lint, compile and test policies. Native GitOps support is built in.
Use the same development best practices you use day-to-day to develop and deploy authorization logic.

### Comprehensive audit trails

The textual policy language of Cerbos makes it ideal for storing policies in version control systems. Follow the evolution of access rules through time and pinpoint exactly when changes were made, why, and by whom.

The Cerbos Policy Decision Point (PDP) is built for modern, containerised microservice environments, with support for both x86-64 and ARM64 architectures, comprehensive observability integrations (metrics, distributed tracing), REST and gRPC endpoints, and native GitOps support (CI tooling, push-to-deploy).

## [](#%5Fcerbos%5Fworkflow)Cerbos workflow

* Author Cerbos policies to define access rules for your resources. Optionally, write unit tests for the policies using the Cerbos DSL.
* Compile the policies and run tests using the Cerbos CLI.
* Follow your standard development process to push the changes to production. (E.g. create a pull request, run CI tests, get approval and merge to the prod branch.)
* Cerbos automatically pulls the latest commits from the production branch and updates the policies in place without requiring a restart. Your changes are now rolled out!

## [](#%5Fauthorization%5Fas%5Fa%5Fservice)Authorization as a Service

Cerbos is designed to be deployed as a service rather than as a library compiled into an application. This design choice provides several benefits:

* Permission checks can be performed by any part of the application stack and even shared between multiple services, regardless of the programming language, CPU architecture, operating system or deployment model.
* Policy updates take effect instantly without having to recompile or redeploy the applications. This reduces disruption to busy services and enables policy authors to iterate quickly and respond to events faster.
* With modern network stacks, the communication overhead is [effectively negligible](https://www.miketheman.net/2021/12/28/container-to-container-communication/) in all but the most extreme cases. Even in those exceptional cases, scaling Cerbos to handle the demand is extremely easy due to its lightweight, stateless design.
* All development and optimization effort can be concentrated on a single project because we do not need to replicate the work across multiple language-specific implementations. All our users, regardless of their programming language of choice, get the benefit of the latest and greatest Cerbos features as soon as they are released.

The Cerbos approach is a proven, modern, cloud native pattern for delivering language-agnostic infrastructure services. [Microsoft Dapr](https://dapr.io), [Istio](https://istio.io) and [Linkerd](https://linkerd.io) are good examples of popular products utilising similar language-agnostic service APIs to augment applications.

Because Cerbos is in the critical request path and expected to handle large volumes of requests, we are obsessive about making Cerbos as fast and as efficient as possible with every release. Cerbos exposes an efficient, low-latency gRPC API and is designed to be stateless and lightweight so that it can be deployed as a sidecar right next to your application. It can even be accessed over Unix domain sockets for extra security and reduced overhead.

Quickstart
====================

Create a directory to store the policies.

```sh
mkdir -p cerbos-quickstart/policies
```

Now start the Cerbos server. This guide uses the container image, but you can follow along using the binary as well. See the [installation instructions](installation/binary.html) for more information.

```shell
docker run --rm --name cerbos -d -v $(pwd)/cerbos-quickstart/policies:/policies -p 3592:3592 -p 3593:3593 ghcr.io/cerbos/cerbos:0.45.1
```

Time to try out a simple request.
| | If you prefer to use [Postman](https://www.postman.com), [Insomnia](https://insomnia.rest) or any other software that supports OpenAPI, you can follow this guide along in those tools by downloading the OpenAPI definitions from a running PDP. You can also use the PDP’s built-in API browser. |
| --- |

* cURL
* .NET
* Go
* Java
* JS
* PHP
* Python
* Ruby
* Rust

```shell
cat <<EOF | curl --silent "http://localhost:3592/api/check/resources?pretty" -d @-
{
  "requestId": "quickstart",
  "principal": {
    "id": "bugs_bunny",
    "roles": ["user"],
    "attr": {
      "beta_tester": true
    }
  },
  "resources": [
    {
      "actions": ["view:public", "comment"],
      "resource": {
        "kind": "album:object",
        "id": "BUGS001",
        "attr": {
          "owner": "bugs_bunny",
          "public": false,
          "flagged": false
        }
      }
    },
    {
      "actions": ["view:public", "comment"],
      "resource": {
        "kind": "album:object",
        "id": "DAFFY002",
        "attr": {
          "owner": "daffy_duck",
          "public": true,
          "flagged": false
        }
      }
    }
  ]
}
EOF
```

```csharp
using System;
using Cerbos.Api.V1.Effect;
using Cerbos.Sdk.Builder;
using Cerbos.Sdk.Utility;

internal class Program
{
    private static void Main(string[] args)
    {
        var client = CerbosClientBuilder.ForTarget("http://localhost:3593").WithPlaintext().Build();

        var request = CheckResourcesRequest.NewInstance()
            .WithRequestId(RequestId.Generate())
            .WithPrincipal(
                Principal.NewInstance("bugs_bunny", "user")
                    .WithAttribute("beta_tester", AttributeValue.BoolValue(true))
            )
            .WithResourceEntries(
                ResourceEntry.NewInstance("album:object", "BUGS001")
                    .WithAttribute("owner", AttributeValue.StringValue("bugs_bunny"))
                    .WithAttribute("public", AttributeValue.BoolValue(false))
                    .WithAttribute("flagged", AttributeValue.BoolValue(false))
                    .WithActions("view:public", "comment"),
                ResourceEntry.NewInstance("album:object", "DAFFY002")
                    .WithAttribute("owner", AttributeValue.StringValue("daffy_duck"))
                    .WithAttribute("public", AttributeValue.BoolValue(true))
                    .WithAttribute("flagged", AttributeValue.BoolValue(false))
                    .WithActions("view:public", "comment")
            );

        var result = client.CheckResources(request);
        foreach (var resourceId in new[] { "BUGS001", "DAFFY002" })
        {
            var resultEntry = result.Find(resourceId);
            Console.WriteLine($"\nResource: {resourceId}");
            foreach (var actionEffect in resultEntry.Actions)
            {
                string action = actionEffect.Key;
                Effect effect = actionEffect.Value;
                Console.WriteLine($"\t{action} -> {(effect == Effect.Allow ? "EFFECT_ALLOW" : "EFFECT_DENY")}\n");
            }
        }
    }
}
```

```go
package main

import (
	"context"
	"log"

	"github.com/cerbos/cerbos-sdk-go/cerbos"
)

func main() {
	c, err := cerbos.New("localhost:3593", cerbos.WithPlaintext())
	if err != nil {
		log.Fatalf("Failed to create client: %v", err)
	}

	principal := cerbos.NewPrincipal("bugs_bunny", "user")
	principal.WithAttr("beta_tester", true)

	kind := "album:object"
	actions := []string{"view:public", "comment"}

	r1 := cerbos.NewResource(kind, "BUGS001")
	r1.WithAttributes(map[string]any{
		"owner":   "bugs_bunny",
		"public":  false,
		"flagged": false,
	})

	r2 := cerbos.NewResource(kind, "DAFFY002")
	r2.WithAttributes(map[string]any{
		"owner":   "daffy_duck",
		"public":  true,
		"flagged": false,
	})

	batch := cerbos.NewResourceBatch()
	batch.Add(r1, actions...)
	batch.Add(r2, actions...)
	resp, err := c.CheckResources(context.Background(), principal, batch)
	if err != nil {
		log.Fatalf("Failed to check resources: %v", err)
	}

	log.Printf("%v", resp)
}
```

```java
package demo;

import static dev.cerbos.sdk.builders.AttributeValue.boolValue;
import static dev.cerbos.sdk.builders.AttributeValue.stringValue;

import java.util.Map;

import dev.cerbos.sdk.CerbosBlockingClient;
import dev.cerbos.sdk.CerbosClientBuilder;
import dev.cerbos.sdk.CheckResult;
import dev.cerbos.sdk.builders.Principal;
import dev.cerbos.sdk.builders.ResourceAction;

public class App {
    public static void main(String[] args) throws CerbosClientBuilder.InvalidClientConfigurationException {
        CerbosBlockingClient client = new CerbosClientBuilder("localhost:3593").withPlaintext().buildBlockingClient();

        for (String n : new String[]{"BUGS001", "DAFFY002"}) {
            CheckResult cr = client.batch(
                    Principal.newInstance("bugs_bunny", "user")
                        .withAttribute("beta_tester", boolValue(true))
                )
                .addResources(
                    ResourceAction.newInstance("album:object", "BUGS001")
                        .withAttributes(
                            Map.of(
                                "owner", stringValue("bugs_bunny"),
                                "public", boolValue(false),
                                "flagged", boolValue(false)
                            )
                        )
                        .withActions("view:public", "comment"),
                    ResourceAction.newInstance("album:object", "DAFFY002")
                        .withAttributes(
                            Map.of(
                                "owner", stringValue("daffy_duck"),
                                "public", boolValue(true),
                                "flagged", boolValue(false)
                            )
                        )
                        .withActions("view:public", "comment")
                )
                .check().find(n).orElse(null);

            if (cr != null) {
                System.out.printf("\nResource: %s\n", n);
                cr.getAll().forEach((action, allowed) -> {
                    System.out.printf("\t%s -> %s\n", action, allowed ?
                        "EFFECT_ALLOW" : "EFFECT_DENY");
                });
            }
        }
    }
}
```

```javascript
const { GRPC: Cerbos } = require("@cerbos/grpc");

const cerbos = new Cerbos("localhost:3593", { tls: false });

(async () => {
  const kind = "album:object";
  const actions = ["view:public", "comment"];

  const cerbosPayload = {
    principal: {
      id: "bugs_bunny",
      roles: ["user"],
      attributes: {
        beta_tester: true,
      },
    },
    resources: [
      {
        resource: {
          kind: kind,
          id: "BUGS001",
          attributes: {
            owner: "bugs_bunny",
            public: false,
            flagged: false,
          },
        },
        actions: actions,
      },
      {
        resource: {
          kind: kind,
          id: "DAFFY002",
          attributes: {
            owner: "daffy_duck",
            public: true,
            flagged: false,
          },
        },
        actions: actions,
      },
    ],
  };

  const decision = await cerbos.checkResources(cerbosPayload);
  console.log(decision.results);
})();
```

```php
<?php

require __DIR__ . '/vendor/autoload.php';

use Cerbos\Effect\V1\Effect;
use Cerbos\Sdk\Builder\AttributeValue;
use Cerbos\Sdk\Builder\CerbosClientBuilder;
use Cerbos\Sdk\Builder\CheckResourcesRequest;
use Cerbos\Sdk\Builder\Principal;
use Cerbos\Sdk\Builder\ResourceEntry;
use Cerbos\Sdk\Utility\RequestId;

$client = CerbosClientBuilder::newInstance("localhost:3593")
    ->withPlaintext(true)
    ->build();

$request = CheckResourcesRequest::newInstance()
    ->withRequestId(RequestId::generate())
    ->withPrincipal(
        Principal::newInstance("bugs_bunny")
            ->withRole("user")
            ->withAttribute("beta_tester", AttributeValue::boolValue(true))
    )
    ->withResourceEntries(
        [
            ResourceEntry::newInstance("album:object", "BUGS001")
                ->withAttribute("owner", AttributeValue::stringValue("bugs_bunny"))
                ->withAttribute("public", AttributeValue::boolValue(false))
                ->withAttribute("flagged", AttributeValue::boolValue(false))
                ->withActions(["comment", "view:public"]),
            ResourceEntry::newInstance("album:object", "DAFFY002")
                ->withAttribute("owner", AttributeValue::stringValue("daffy_duck"))
                ->withAttribute("public", AttributeValue::boolValue(true))
                ->withAttribute("flagged", AttributeValue::boolValue(false))
                ->withActions(["comment", "view:public"])
        ]
    );

$checkResourcesResponse = $client->checkResources($request);
foreach (["BUGS001", "DAFFY002"] as $resourceId) {
    $resultEntry = $checkResourcesResponse->find($resourceId);
    $actions = $resultEntry->getActions();
    foreach ($actions as $k => $v) {
        printf("%s -> %s", $k, Effect::name($v));
    }
}

?>
```

```python
import json

from cerbos.sdk.client import CerbosClient
from cerbos.sdk.model import Principal, Resource, ResourceAction, ResourceList
from fastapi import HTTPException, status

principal = Principal(
    "bugs_bunny",
    roles=["user"],
    attr={
        "beta_tester": True,
    },
)

actions = ["view:public", "comment"]
resource_list = ResourceList(
    resources=[
        ResourceAction(
            Resource(
                "BUGS001",
                "album:object",
                attr={
                    "owner": "bugs_bunny",
                    "public": False,
                    "flagged": False,
                },
            ),
            actions=actions,
        ),
        ResourceAction(
            Resource(
                "DAFFY002",
                "album:object",
                attr={
                    "owner": "daffy_duck",
                    "public": True,
                    "flagged": False,
                },
            ),
            actions=actions,
        ),
    ],
)

with CerbosClient(host="http://localhost:3592") as c:
    try:
        resp = c.check_resources(principal=principal, resources=resource_list)
        resp.raise_if_failed()
    except Exception:
        raise HTTPException(
            status_code=status.HTTP_403_FORBIDDEN, detail="Unauthorized"
        )

print(json.dumps(resp.to_dict(), sort_keys=False, indent=4))
```

```ruby
require 'cerbos'
require 'json'

client = Cerbos::Client.new("localhost:3593", tls: false)

kind = "album:object"
actions = ["view:public", "comment"]

r1 = {
  :kind => kind,
  :id => "BUGS001",
  :attributes => {
    :owner => "bugs_bunny",
    :public => false,
    :flagged => false,
  }
}

r2 = {
  :kind => kind,
  :id => "DAFFY002",
  :attributes => {
    :owner => "daffy_duck",
    :public => true,
    :flagged => false,
  }
}

decision = client.check_resources(
  principal: {
    id: "bugs_bunny",
    roles: ["user"],
    attributes: {
      beta_tester: true,
    },
  },
  resources: [
    { resource: r1, actions: actions },
    { resource: r2, actions: actions },
  ],
)

res = {
  :results => [
    {
      :resource => r1,
      :actions => {
        :comment => decision.allow?(resource: r1, action: "comment"),
        :"view:public" => decision.allow?(resource: r1, action: "view:public"),
      },
    },
    {
      :resource => r2,
      :actions => {
        :comment => decision.allow?(resource: r2, action: "comment"),
        :"view:public" => decision.allow?(resource: r2, action: "view:public"),
      },
    },
  ],
}

puts JSON.pretty_generate(res)
```

```rust
use cerbos::sdk::attr::attr;
use cerbos::sdk::model::{Principal, Resource,
ResourceAction, ResourceList};
use cerbos::sdk::{CerbosAsyncClient, CerbosClientOptions, CerbosEndpoint, Result};

#[tokio::main]
async fn main() -> Result<()> {
    let opt = CerbosClientOptions::new(CerbosEndpoint::HostPort("localhost", 3593)).with_plaintext();
    let mut client = CerbosAsyncClient::new(opt).await?;

    let principal = Principal::new("bugs_bunny", ["user"]).with_attributes([attr("beta_tester", true)]);

    let actions: [&str; 2] = ["view:public", "comment"];

    let resp = client
        .check_resources(
            principal,
            ResourceList::new_from([
                ResourceAction(
                    Resource::new("BUGS001", "album:object").with_attributes([
                        attr("owner", "bugs_bunny"),
                        attr("public", false),
                        attr("flagged", false),
                    ]),
                    actions,
                ),
                ResourceAction(
                    Resource::new("DAFFY002", "album:object").with_attributes([
                        attr("owner", "daffy_duck"),
                        attr("public", true),
                        attr("flagged", false),
                    ]),
                    actions,
                ),
            ]),
            None,
        )
        .await?;

    println!("{:?}", resp.response);

    Ok(())
}
```

In this example, the `bugs_bunny` principal is trying to perform two actions (`view:public` and `comment`) on two `album:object` resources. The resource instance with the ID `BUGS001` belongs to `bugs_bunny` and is private (its `public` attribute is `false`). The other resource instance, with the ID `DAFFY002`, belongs to `daffy_duck` and is public.

This is the response from the server:

Response

```json
{
  "requestId": "quickstart",
  "results": [
    {
      "resource": {
        "id": "BUGS001",
        "kind": "album:object"
      },
      "actions": {
        "comment": "EFFECT_DENY",
        "view:public": "EFFECT_DENY"
      }
    },
    {
      "resource": {
        "id": "DAFFY002",
        "kind": "album:object"
      },
      "actions": {
        "comment": "EFFECT_DENY",
        "view:public": "EFFECT_DENY"
      }
    }
  ]
}
```

Bugs Bunny is not allowed to view or comment on any of the album resources, even the ones that belong to him. This is because there are currently no policies defined for the `album:object` resource.
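Whichever client you use, acting on the decision ultimately comes down to reading the effect returned for each action in the response. A minimal, dependency-free sketch (the `is_allowed` helper and its fail-closed behaviour are our own illustration, not part of any Cerbos SDK) that gates on the parsed JSON response shown above:

```python
def is_allowed(response: dict, resource_id: str, action: str) -> bool:
    """Return True only when the decision for the action is EFFECT_ALLOW.

    A missing resource or action is treated as a deny, so the check
    fails closed.
    """
    for result in response.get("results", []):
        if result.get("resource", {}).get("id") == resource_id:
            return result.get("actions", {}).get(action) == "EFFECT_ALLOW"
    return False


# Abbreviated form of the response shown above.
response = {
    "requestId": "quickstart",
    "results": [
        {
            "resource": {"id": "BUGS001", "kind": "album:object"},
            "actions": {"comment": "EFFECT_DENY", "view:public": "EFFECT_DENY"},
        }
    ],
}

print(is_allowed(response, "BUGS001", "comment"))  # False
print(is_allowed(response, "BUGS001", "delete"))   # False: unknown actions deny
```

Failing closed means a typo in an action name or a missing result can never accidentally grant access.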
Now create a [derived roles](policies/derived%5Froles.html) definition that assigns the `owner` dynamic role to a user if the `owner` attribute of the resource they’re trying to access is equal to their ID.

```sh
cat > cerbos-quickstart/policies/derived_roles_common.yaml <<EOF
---
apiVersion: "api.cerbos.dev/v1"
derivedRoles:
  name: apatr_common_roles
  definitions:
    - name: owner
      parentRoles: ["user"]
      condition:
        match:
          expr: request.resource.attr.owner == request.principal.id
EOF
```

Next, define a resource policy for the `album:object` resource that imports the derived roles, grants owners full access to their own albums, and allows any user to view public albums.

```sh
cat > cerbos-quickstart/policies/resource_album.yaml <<EOF
---
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  importDerivedRoles:
    - apatr_common_roles
  resource: "album:object"
  rules:
    - actions: ['*']
      effect: EFFECT_ALLOW
      derivedRoles:
        - owner
    - actions: ['view:public']
      effect: EFFECT_ALLOW
      roles:
        - user
      condition:
        match:
          expr: request.resource.attr.public == true
EOF
```

The Cerbos Admin API
====================

### [](#disable-policies)Disable Policies

POST /admin/policy/disable?id=policy_id

| | This endpoint requires a mutable storage driver such as [sqlite3](../configuration/storage.html#sqlite3) to be configured. |
| ----------------------------------------------------------------------------------------------------------------------------- |

Issue a POST request to the endpoint with the list of IDs (the `id` query parameter can be repeated multiple times) to disable. The ID is of the form `<type>.<name>.v<version>/<scope>`. A resource policy for `leave_request` with version `default` and scope `acme.hr` would therefore have the ID `resource.leave_request.vdefault/acme.hr`.

```shell
curl -k -u cerbos:cerbosAdmin -X POST \
  'https://localhost:3592/admin/policy/disable?id=principal.donald_duck.vdefault&id=derived_roles.my_derived_roles'
```

Response

```json
{
  "disabledPolicies": 2 (1)
}
```

| **1** | Number of policies disabled |
| ----- | --------------------------- |

### [](#enable-policies)Enable Policies

POST /admin/policy/enable?id=policy_id
PUT /admin/policy/enable?id=policy_id

| | This endpoint requires a mutable storage driver such as [sqlite3](../configuration/storage.html#sqlite3) to be configured. |
| ----------------------------------------------------------------------------------------------------------------------------- |

Issue a POST request to the endpoint with the list of IDs (the `id` query parameter can be repeated multiple times) to enable. The ID is of the form `<type>.<name>.v<version>/<scope>`. A resource policy for `leave_request` with version `default` and scope `acme.hr` would therefore have the ID `resource.leave_request.vdefault/acme.hr`.
```shell
curl -k -u cerbos:cerbosAdmin -X POST \
  'https://localhost:3592/admin/policy/enable?id=principal.donald_duck.vdefault&id=derived_roles.my_derived_roles'
```

Response

```json
{
  "enabledPolicies": 2 (1)
}
```

| **1** | Number of policies enabled |
| ----- | -------------------------- |

## [](#%5Fschema%5Fmanagement)Schema Management

### [](#%5Faddupdate%5Fschemas)Add/update schemas

POST /admin/schema
PUT /admin/schema

| | This endpoint requires a mutable storage driver such as [sqlite3](../configuration/storage.html#sqlite3) to be configured. |
| ----------------------------------------------------------------------------------------------------------------------------- |

Request

```json
{
  "schemas": [ (1)
    {
      "id": "principal.json",
      "definition": "ewogICIkc2NoZW1hIjogImh0dHBzOi8vanNvbi1zY2hlbWEub3JnL2RyYWZ0LzIwMjAtMTIvc2NoZW1hIiwKICAidHlwZSI6ICJvYmplY3QiLAogICJwcm9wZXJ0aWVzIjogewogICAgImRlcGFydG1lbnQiOiB7CiAgICAgICJ0eXBlIjogInN0cmluZyIsCiAgICAgICJlbnVtIjogWwogICAgICAgICJtYXJrZXRpbmciLAogICAgICAgICJlbmdpbmVlcmluZyIKICAgICAgXQogICAgfSwKICAgICJnZW9ncmFwaHkiOiB7CiAgICAgICJ0eXBlIjogInN0cmluZyIKICAgIH0sCiAgICAidGVhbSI6IHsKICAgICAgInR5cGUiOiAic3RyaW5nIgogICAgfSwKICAgICJtYW5hZ2VkX2dlb2dyYXBoaWVzIjogewogICAgICAidHlwZSI6ICJzdHJpbmciCiAgICB9LAogICAgIm9yZ0lkIjogewogICAgICAidHlwZSI6ICJzdHJpbmciCiAgICB9LAogICAgImpvYlJvbGVzIjogewogICAgICAidHlwZSI6ICJhcnJheSIsCiAgICAgICJpdGVtcyI6IHsKICAgICAgICAgICJ0eXBlIjogInN0cmluZyIKICAgICAgfQogICAgfSwKICAgICJ0YWdzIjogewogICAgICAidHlwZSI6ICJvYmplY3QiLAogICAgICAicHJvcGVydGllcyI6IHsKICAgICAgICAiYnJhbmRzIjogewogICAgICAgICAgInR5cGUiOiAiYXJyYXkiLAogICAgICAgICAgIml0ZW1zIjogewogICAgICAgICAgICAgICJ0eXBlIjogInN0cmluZyIKICAgICAgICAgIH0KICAgICAgICB9LAogICAgICAgICJjbGFzc2VzIjogewogICAgICAgICAgInR5cGUiOiAiYXJyYXkiLAogICAgICAgICAgIml0ZW1zIjogewogICAgICAgICAgICAgICJ0eXBlIjogInN0cmluZyIKICAgICAgICAgIH0KICAgICAgICB9LAogICAgICAgICJyZWdpb25zIjogewogICAgICAgICAgInR5cGUiOiAiYXJyYXkiLAogICAgICAgICAgIml0ZW1zIjogewogICAgICAgICAgICAgICJ0eXBlIjogInN0cmluZyIKICAgICAgICAgIH0KICAgICAgICB9CiAgICAgIH0KICAgIH0KICB9LAogICJyZXF1aXJlZCI6IFsKICAgICJkZXBhcnRtZW50IiwKICAgICJnZW9ncmFwaHkiLAogICAgInRlYW0iCiAgXQp9Cg==" (2)
    }
  ]
}
```

| **1** | List of schema definitions |
| ----- | --------------------------------------------------------------- |
| **2** | base64 encoded [JSON schema](http://json-schema.org) definition |

Response

```json
{}
```

### [](#%5Flist%5Fschemas)List schemas

GET /admin/schemas

Issue a GET request to the endpoint to list the schemas available in the store.

| | Only the schema IDs will be returned from this request. Use the GetSchema endpoint to retrieve the full definition of a schema. |
| ---------------------------------------------------------------------------------------------------------------------------------- |

```shell
curl -k -u cerbos:cerbosAdmin \
  'https://localhost:3592/admin/schemas'
```

Response

```json
{
  "schemaIds": [ (1)
    "principal.json",
    "leave_request.json"
  ]
}
```

| **1** | List of schema ids |
| ----- | ------------------ |

### [](#%5Fget%5Fschemas)Get schema(s)

GET /admin/schema

Issue a GET request to the endpoint to get the schema(s) stated in the query parameters.
```shell
curl -k -u cerbos:cerbosAdmin \
  'https://localhost:3592/admin/schema?id=principal.json&id=leave_request.json'
```

Response

```json
{
  "schemas": [ (1)
    {
      "id": "principal.json",
      "definition": "ewogICIkc2NoZW1hIjogImh0dHBzOi8vanNvbi1zY2hlbWEub3JnL2RyYWZ0LzIwMjAtMTIvc2NoZW1hIiwKICAidHlwZSI6ICJvYmplY3QiLAogICJwcm9wZXJ0aWVzIjogewogICAgImRlcGFydG1lbnQiOiB7CiAgICAgICJ0eXBlIjogInN0cmluZyIsCiAgICAgICJlbnVtIjogWwogICAgICAgICJtYXJrZXRpbmciLAogICAgICAgICJlbmdpbmVlcmluZyIKICAgICAgXQogICAgfSwKICAgICJnZW9ncmFwaHkiOiB7CiAgICAgICJ0eXBlIjogInN0cmluZyIKICAgIH0sCiAgICAidGVhbSI6IHsKICAgICAgInR5cGUiOiAic3RyaW5nIgogICAgfSwKICAgICJtYW5hZ2VkX2dlb2dyYXBoaWVzIjogewogICAgICAidHlwZSI6ICJzdHJpbmciCiAgICB9LAogICAgIm9yZ0lkIjogewogICAgICAidHlwZSI6ICJzdHJpbmciCiAgICB9LAogICAgImpvYlJvbGVzIjogewogICAgICAidHlwZSI6ICJhcnJheSIsCiAgICAgICJpdGVtcyI6IHsKICAgICAgICAgICJ0eXBlIjogInN0cmluZyIKICAgICAgfQogICAgfSwKICAgICJ0YWdzIjogewogICAgICAidHlwZSI6ICJvYmplY3QiLAogICAgICAicHJvcGVydGllcyI6IHsKICAgICAgICAiYnJhbmRzIjogewogICAgICAgICAgInR5cGUiOiAiYXJyYXkiLAogICAgICAgICAgIml0ZW1zIjogewogICAgICAgICAgICAgICJ0eXBlIjogInN0cmluZyIKICAgICAgICAgIH0KICAgICAgICB9LAogICAgICAgICJjbGFzc2VzIjogewogICAgICAgICAgInR5cGUiOiAiYXJyYXkiLAogICAgICAgICAgIml0ZW1zIjogewogICAgICAgICAgICAgICJ0eXBlIjogInN0cmluZyIKICAgICAgICAgIH0KICAgICAgICB9LAogICAgICAgICJyZWdpb25zIjogewogICAgICAgICAgInR5cGUiOiAiYXJyYXkiLAogICAgICAgICAgIml0ZW1zIjogewogICAgICAgICAgICAgICJ0eXBlIjogInN0cmluZyIKICAgICAgICAgIH0KICAgICAgICB9CiAgICAgIH0KICAgIH0KICB9LAogICJyZXF1aXJlZCI6IFsKICAgICJkZXBhcnRtZW50IiwKICAgICJnZW9ncmFwaHkiLAogICAgInRlYW0iCiAgXQp9Cg=="
    },
    {
      "id": "leave_request.json",
      "definition":
        "ewogICIkc2NoZW1hIjogImh0dHBzOi8vanNvbi1zY2hlbWEub3JnL2RyYWZ0LzIwMjAtMTIvc2NoZW1hIiwKICAidHlwZSI6ICJvYmplY3QiLAogICJwcm9wZXJ0aWVzIjogewogICAgImRlcGFydG1lbnQiOiB7CiAgICAgICJ0eXBlIjogInN0cmluZyIsCiAgICAgICJlbnVtIjogWwogICAgICAgICJtYXJrZXRpbmciLAogICAgICAgICJlbmdpbmVlcmluZyIKICAgICAgXQogICAgfSwKICAgICJnZW9ncmFwaHkiOiB7CiAgICAgICJ0eXBlIjogInN0cmluZyIKICAgIH0sCiAgICAidGVhbSI6IHsKICAgICAgInR5cGUiOiAic3RyaW5nIgogICAgfSwKICAgICJpZCI6IHsKICAgICAgInR5cGUiOiAic3RyaW5nIgogICAgfSwKICAgICJvd25lciI6IHsKICAgICAgInR5cGUiOiAic3RyaW5nIgogICAgfSwKICAgICJzdGF0dXMiOiB7CiAgICAgICJ0eXBlIjogInN0cmluZyIKICAgIH0sCiAgICAiZGV2X3JlY29yZCI6IHsKICAgICAgInR5cGUiOiAiYm9vbGVhbiIKICAgIH0KICB9LAogICJyZXF1aXJlZCI6IFsKICAgICJkZXBhcnRtZW50IiwKICAgICJnZW9ncmFwaHkiLAogICAgInRlYW0iLAogICAgImlkIgogIF0KfQo="
    }
  ]
}
```

| **1** | List of schemas |
| ----- | --------------- |

### [](#%5Fdelete%5Fschemas)Delete schema(s)

DELETE /admin/schema

Issue a DELETE request to the endpoint to delete the schema(s) stated in the query parameters.

```shell
curl -k -u cerbos:cerbosAdmin -X DELETE \
  'https://localhost:3592/admin/schema?id=principal.json&id=leave_request.json'
```

Response

```json
{
  "deletedSchemas": 2 (1)
}
```

| **1** | Number of schemas deleted |
| ----- | ------------------------- |

## [](#store-management)Store Management

### [](#%5Freload%5Fstore)Reload store

GET /admin/store/reload

Issue a GET request to the endpoint to force a reload of the store.

Reload the store

```shell
curl -k -u cerbos:cerbosAdmin -X GET \
  'https://localhost:3592/admin/store/reload'
```

Reload the store and block until it finishes

```shell
curl -k -u cerbos:cerbosAdmin -X GET \
  'https://localhost:3592/admin/store/reload?wait=true'
```

Response

```json
{}
```

| | This endpoint requires a reloadable storage driver such as [blob](../configuration/storage.html#blob), [disk](../configuration/storage.html#disk) or [git](../configuration/storage.html#git) to be configured. |
| --- |

The Cerbos API
====================

The main API endpoint for making policy decisions is the [/api/check/resources REST endpoint](#check-resources) (`cerbos.svc.v1.CerbosService/CheckResources` RPC in the gRPC API). You can browse a [static version of the Cerbos OpenAPI specification on this site](%5Fattachments/cerbos-api.html). To interactively explore the API, launch a Cerbos instance and access the root directory of the HTTP endpoint using a browser.

```sh
docker run --rm --name cerbos -p 3592:3592 -p 3593:3593 ghcr.io/cerbos/cerbos:0.45.1
```

Navigate to it with your browser to explore the Cerbos API documentation. Alternatively, you can explore the API using the following methods:

* Using OpenAPI-compatible software like [Postman](https://www.postman.com) or [Insomnia](https://insomnia.rest) to explore the Cerbos OpenAPI spec served by the PDP.
* Using [grpcurl](https://github.com/fullstorydev/grpcurl) or any other tool that supports the [gRPC server reflection](https://github.com/grpc/grpc/blob/master/doc/server-reflection.md) API to explore the gRPC API exposed on port 3593.
## [](#%5Fclient%5Fsdks)Client SDKs

* [![Go](_images/go.svg)](https://pkg.go.dev/github.com/cerbos/cerbos-sdk-go/cerbos)[ Go](https://pkg.go.dev/github.com/cerbos/cerbos-sdk-go/cerbos)
* [![Java](_images/java.svg)](https://github.com/cerbos/cerbos-sdk-java)[ Java](https://github.com/cerbos/cerbos-sdk-java)
* [![JavaScript](_images/javascript.svg)](https://github.com/cerbos/cerbos-sdk-javascript)[ JavaScript](https://github.com/cerbos/cerbos-sdk-javascript)
* [![.NET](_images/dot-net.svg)](https://github.com/cerbos/cerbos-sdk-net)[ .NET](https://github.com/cerbos/cerbos-sdk-net)
* [![Laravel](_images/laravel.svg)](https://github.com/cerbos/cerbos-sdk-laravel)[ Laravel](https://github.com/cerbos/cerbos-sdk-laravel)
* [![PHP](_images/php.svg)](https://github.com/cerbos/cerbos-sdk-php)[ PHP](https://github.com/cerbos/cerbos-sdk-php)
* [![Python](_images/python.svg)](https://github.com/cerbos/cerbos-sdk-python)[ Python](https://github.com/cerbos/cerbos-sdk-python)
* [![Ruby](_images/ruby.svg)](https://github.com/cerbos/cerbos-sdk-ruby)[ Ruby](https://github.com/cerbos/cerbos-sdk-ruby)
* [![Rust](_images/rust.svg)](https://github.com/cerbos/cerbos-sdk-rust)[ Rust](https://github.com/cerbos/cerbos-sdk-rust)

Other languages coming soon

## [](#%5Fdemos)Demos

| | Demos are constantly being added or updated by the Cerbos team. Visit the Cerbos website for the latest list. |
| | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | * [Application (Python)](https://github.com/cerbos/demo-python) * [GraphQL Service (NodeJS)](https://github.com/cerbos/demo-graphql) * [REST Service (Go)](https://github.com/cerbos/demo-rest) Get help * [Join the Cerbos community on Slack](http://go.cerbos.io/slack) * [Email us at ](mailto:help@cerbos.dev)[help@cerbos.dev](mailto:help@cerbos.dev) ## [](#%5Frequest%5Fand%5Fresponse%5Fformats)Request and response formats ### [](#check-resources)`CheckResources` (`/api/check/resources`) This is the main API entrypoint for checking permissions for a set of resources. Request ```json { "requestId": "test", (1) "principal": { "id": "alice", (2) "policyVersion": "20210210", (3) "scope": "acme.corp", (4) "roles": [ (5) "employee" ], "attr": { (6) "department": "accounting", "geography": "GB", "team": "design" } }, "resources": [ (7) { "resource": { "id": "XX125", (8) "kind": "leave_request", (9) "policyVersion": "20210210", (10) "scope": "acme.corp", (11) "attr": { (12) "department": "accounting", "geography": "GB", "id": "XX125", "owner": "john", "team": "design" } }, "actions": [ (13) "view:public", "approve", "create" ] } ], "auxData": { (14) "jwt": { "token": "xxx.yyy.zzz", (15) "keySetId": "ks1" (16) } }, "includeMeta": true (17) } ``` | **1** | Request ID is an optional, application-provided identifier useful for correlating logs. | | ------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **2** | ID of the principal whose permissions are being checked. This usually comes from the identity provider (IdP). | | **3** | Principal policy version. Optional. 
The server falls back to the [configured default version](../configuration/engine.html) if this is not specified. | | **4** | Principal policy scope. Optional. See [Scoped policies](../policies/scoped%5Fpolicies.html). | | **5** | The roles attached to this principal by the identity provider. | | **6** | Free-form context data about this principal. Policy rule conditions are evaluated based on these values. | | **7** | List of resources the principal is attempting to access. Up to 50 resources may be provided in a single request by default. This [limit can be configured](../configuration/server.html#request-limits). | | **8** | ID of the resource. | | **9** | Resource kind. This is used to determine the resource policy that applies to this resource. | | **10** | Resource policy version. Optional. The server falls back to the [configured default version](../configuration/engine.html) if this is not specified. | | **11** | Resource policy scope. Optional. See [Scoped policies](../policies/scoped%5Fpolicies.html). | | **12** | Free-form context data about this resource. Policy rule conditions are evaluated based on these values. | | **13** | List of actions being performed on the resource. Up to 50 actions per resource may be provided by default. This [limit can be configured](../configuration/server.html#request-limits). | | **14** | Optional section for providing auxiliary data. | | **15** | JWT to use as an auxiliary data source. | | **16** | ID of the keyset to use to verify the JWT. Optional if only a single [keyset is configured](../configuration/auxdata.html). | | **17** | Optional flag to receive metadata about request evaluation. 
| Response ```json { "requestId": "test", (1) "results": [ (2) { "resource": { (3) "id": "XX125", "kind": "leave_request", "policyVersion": "20210210", "scope": "acme.corp" }, "actions": { (4) "view:public": "EFFECT_ALLOW", "approve": "EFFECT_DENY" }, "outputs": [ (5) { "src": "resource.leave_request.v20210210/acme#rule-001", (6) "val": "create_allowed:john" (7) }, { "src": "resource.leave_request.v20210210#public-view", "val": { "id": "john", "keys": ["foo", "bar", "baz"] } } ], "validationErrors": [ (8) { "path": "/department", "message": "value must be one of \"marketing\", \"engineering\"", "source": "SOURCE_PRINCIPAL" }, { "path": "/department", "message": "value must be one of \"marketing\", \"engineering\"", "source": "SOURCE_RESOURCE" } ], "meta": { (9) "actions": { "view:public": { "matchedPolicy": "resource.leave_request.v20210210/acme.corp", (10) "matchedScope": "acme" (11) }, "approve": { "matchedPolicy": "resource.leave_request.v20210210/acme.corp" } }, "effectiveDerivedRoles": [ (12) "employee_that_owns_the_record", "any_employee" ] } } ], "cerbosCallId": "01HHENANTHFD5DV3HZGDKB87PJ" (13) } ``` | **1** | Request ID that was sent with the request. | | ------ | ----------------------------------------------------------------------------------------------------------------------------- | | **2** | List of results. Items are in the same order as they were sent in the request. | | **3** | Resource identifiers. | | **4** | Access decisions for each of the actions. | | **5** | List of outputs from policy evaluation, if there are any. See [Outputs](../policies/outputs.html). | | **6** | Name of the rule that produced the output. | | **7** | Output value produced by the rule. | | **8** | Validation errors, if [schema enforcement](../policies/schemas.html) is enabled and the request didn’t conform to the schema. | | **9** | Metadata (if includeMeta was true in the request) | | **10** | Name of the policy that produced the decision for this action. 
| | **11** | Name of the scope that was active when the decision was made for the action. | | **12** | List of derived roles that were activated. | | **13** | The call ID generated by Cerbos and stored in the audit log for this request. | ### [](#resources-query-plan)`PlanResources` (`/api/plan/resources`) Produces a query plan that can be used to obtain a list of resources that a principal is allowed to perform a particular action on. Request ```json { "requestId": "test01", (1) "action": "approve", (2) "actions": ["approve", "view"], (3) "resource": { "policyVersion": "dev", (4) "kind": "leave_request", (5) "scope": "acme.corp", (6) "attr": { (7) "owner": "alicia" } }, "principal": { "id": "alicia", (8) "policyVersion": "dev", (9) "scope": "acme.corp", (10) "roles": ["user"], (11) "attr": { (12) "geography": "GB" } }, "includeMeta": true, (13) "auxData": { (14) "jwt": { "token": "xxx.yyy.zzz", (15) "keySetId": "ks-1" (16) } } } ``` | **1** | Request ID can be anything that uniquely identifies a request. | | ------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **2** | Action being performed on the resource instances. Either <2> or <3> is required. | | **3** | Actions being performed on the resource instances. The query plan is the logical AND of individual query plans for each action. Either <2> or <3> is required. | | **4** | Resource policy version. Optional. The server falls back to the [configured default version](../configuration/engine.html) if this is not specified. | | **5** | Resource kind. Required. This value is used to determine the resource policy to evaluate. | | **6** | Resource scope. Optional. 
See [Scoped policies](../policies/scoped%5Fpolicies.html). | | **7** | Free-form context data about the resources under consideration. The object holds all attributes known about the resource at the time of the request. Optional. Policy rule conditions will be (partially) evaluated based on these values. If an effective policy rule condition requires a resource attribute that is not present in this object, the response will contain the abstract syntax tree of that condition. | | **8** | ID of the principal performing the actions. Required. | | **9** | Principal policy version. Optional. The server falls back to the [configured default version](../configuration/engine.html) if this is not specified. | | **10** | Principal scope. Optional. See [Scoped policies](../policies/scoped%5Fpolicies.html). | | **11** | Static roles that are assigned to this principal by your identity management system. Required. | | **12** | Free-form context data about this principal. Policy rule conditions are evaluated based on these values. | | **13** | An optional flag to signal that the response should include metadata about evaluation. Useful for debugging. | | **14** | Optional section for providing auxiliary data. | | **15** | JWT to use as an auxiliary data source. | | **16** | ID of the keyset to use to verify the JWT. Optional if only a single [keyset is configured](../configuration/auxdata.html). | Response ```json { "requestId": "test01", "action": "approve", "resourceKind": "leave_request", "policyVersion": "dev", "filter": { "kind": "KIND_CONDITIONAL", (1) "condition": { (2) "expression": { "operator": "eq", "operands": [ { "variable": "request.resource.attr.status" }, { "value": "PENDING_APPROVAL" } ] } } }, "meta": { "filterDebug": "(request.resource.attr.status == \"PENDING_APPROVAL\")" (3) }, "cerbosCallId": "01HHENANTHFD5DV3HZGDKB87PJ" (4) } ``` | **1** | Filter kind can be KIND\_ALWAYS\_ALLOWED, KIND\_ALWAYS\_DENIED or KIND\_CONDITIONAL.
See below for a description of what these values mean. | | ----- | ------------------------------------------------------------------------------------------------------------------------------------------------- | | **2** | Populated only if kind is KIND\_CONDITIONAL. Contains the abstract syntax tree (AST) of the condition that must be satisfied to allow the action. | | **3** | Condition AST represented as a human readable string. Useful for debugging. | | **4** | The call ID generated by Cerbos and stored in the audit log for this request. | #### [](#%5Fstructure%5Fof%5Fthe%5Ffilter%5Fblock)Structure of the `filter` block The `kind` field defines the filter kind. `KIND_ALWAYS_ALLOWED` The principal is unconditionally allowed to perform the action. `KIND_ALWAYS_DENIED` The principal is unconditionally not permitted to perform the action. `KIND_CONDITIONAL` The principal is allowed to perform the action if the condition is satisfied. The `condition` field holds the AST of the condition that must be satisfied. It is rooted in an expression that has an `operator` (e.g. equals, greater than) and `operands` (e.g. a constant value, a variable or another expression).
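An application typically walks this AST to build an equivalent filter for its data store (for example, a SQL `WHERE` clause or an ORM filter). The Python sketch below shows one way to handle the three filter kinds and render a conditional expression as an infix predicate string. The helper names and the small operator subset are illustrative, not part of the Cerbos API.

```python
# Sketch: turning a PlanResources "filter" block into an infix predicate
# string. The dict shapes mirror the response documented above; the helper
# names and operator coverage are illustrative, not a Cerbos client API.

OPS = {
    "eq": "==", "ne": "!=", "and": "&&", "or": "||",
    "lt": "<", "le": "<=", "gt": ">", "ge": ">=", "in": "in",
}

def render(node):
    """Recursively render an AST node (expression, variable or value)."""
    if "expression" in node:
        expr = node["expression"]
        op = OPS[expr["operator"]]
        parts = [render(operand) for operand in expr["operands"]]
        return "(" + f" {op} ".join(parts) + ")"
    if "variable" in node:
        return node["variable"]
    return repr(node["value"])

def plan_to_predicate(filter_block):
    kind = filter_block["kind"]
    if kind == "KIND_ALWAYS_ALLOWED":
        return "true"   # no filtering needed: return every matching resource
    if kind == "KIND_ALWAYS_DENIED":
        return "false"  # short-circuit: return an empty result set
    return render(filter_block["condition"])

example = {
    "kind": "KIND_CONDITIONAL",
    "condition": {
        "expression": {
            "operator": "eq",
            "operands": [
                {"variable": "request.resource.attr.status"},
                {"value": "PENDING_APPROVAL"},
            ],
        }
    },
}
print(plan_to_predicate(example))
# → (request.resource.attr.status == 'PENDING_APPROVAL')
```

In a real integration the variable names (e.g. `request.resource.attr.status`) would be mapped to column or field names before being embedded in a query, and values should be passed as bound parameters rather than interpolated strings.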
__Common Operators__

| Operator | Description |
| -------- | -------------------------- |
| add | Addition (+) |
| and | Logical AND (&&) |
| div | Division (/) |
| eq | Equality (==) |
| ge | Greater than or equal (>=) |
| gt | Greater than (>) |
| in | List membership (in) |
| index | Array or map index |
| lambda | Anonymous function |
| le | Less than or equal (<=) |
| list | List constructor |
| lt | Less than (<) |
| mod | Modulo (%) |
| mult | Multiplication (\*) |
| ne | Not equal (!=) |
| not | Logical NOT |
| or | Logical OR |
| sub | Subtract (-) |

Example: `request.resource.attr.status == "PENDING_APPROVAL"` ```json { "expression": { "operator": "eq", "operands": [ { "variable": "request.resource.attr.status" }, { "value": "PENDING_APPROVAL" } ] } } ``` Example: `(request.resource.attr.department == "marketing") && (request.resource.attr.team != "design")` ```json { "expression": { "operator": "and", "operands": [ { "expression": { "operator": "eq", "operands": [ { "variable": "request.resource.attr.department" }, { "value": "marketing" } ] } }, { "expression": { "operator": "ne", "operands": [ { "variable": "request.resource.attr.team" }, { "value": "design" } ] } } ] } } ``` Example: `request.resource.attr.values.filter(t, t > 0)` ```json { "expression": { "operator": "filter", "operands": [ { "variable": "request.resource.attr.values" }, { "expression": { "operator": "lambda", "operands": [ { "variable": "t" }, { "expression": { "operator": "gt", "operands": [ { "variable": "t" }, { "value": 0 } ] } } ] } } ] } } ``` ### [](#server-info)`ServerInfo` (`/api/server_info`) Returns the Cerbos server version. Response ```json { "version": "0.25.0", "commit": "6b5a051a160398a3c04370f742e6090fab2ed0b8", "buildDate": "2023-02-13T09:31:48Z" } ``` ## [](#%5Faccessing%5Fthe%5Fapi)Accessing the API ### [](#%5Fusing%5Fcurl%5Fto%5Faccess%5Fthe%5Frest%5Fapi)Using curl to access the REST API

```sh
cat <<EOF | curl --silent "http://localhost:3592/api/check/resources?pretty" -d @-
{
  "requestId": "test01",
  "principal": {
    "id": "alicia",
    "roles": ["user"],
    "attr": {"geography": "GB"}
  },
  "resources": [
    {
      "actions": ["view", "approve"],
      "resource": {
        "kind": "leave_request",
        "id": "XX125",
        "attr": {"owner": "alicia"}
      }
    }
  ]
}
EOF
```
Cerbos gRPC API definitions are published to the [Buf schema registry (BSR)](https://buf.build/cerbos/cerbos-api) and can be easily added to your project if you use the [Buf build system for protobufs](https://docs.buf.build). ### [](#%5Frest)REST There are many tools available for generating clients from an OpenAPI specification, covering most popular languages. #### [](#%5Fexample%5Fgenerating%5Fa%5Fjava%5Fclient%5Fusing%5Fopenapi%5Fgenerator)Example: Generating a Java client using OpenAPI Generator | | [OpenAPI Generator](https://openapi-generator.tech) has [support for many popular programming languages and frameworks](https://openapi-generator.tech/docs/generators#client-generators). Please consult the documentation to find the client generation instructions for your favourite language. | | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | This is an example of using the popular [OpenAPI Generator](https://openapi-generator.tech) service to generate a Java client API. * Download the Cerbos OpenAPI specification ```sh curl -Lo swagger.json http://localhost:3592/schema/swagger.json ``` * Run the OpenAPI Generator ```sh docker run --rm -v $(pwd):/oas openapitools/openapi-generator-cli generate -i /oas/swagger.json -g java -o /oas/java ``` ### [](#%5Fgrpc)gRPC **Any language** You can access the Cerbos protobuf definitions from the [Cerbos source tree](https://github.com/cerbos/cerbos/tree/main/api). However, the easiest way to generate client code for your preferred language is to use the [Buf build tool](https://docs.buf.build) to obtain the published API definitions from the [Buf schema registry (BSR)](https://buf.build/cerbos/cerbos-api).
* Run `buf export buf.build/cerbos/cerbos-api -o proto` to download the API definitions with dependencies to the `proto` directory. * You can now use [buf generate](https://docs.buf.build/generate-usage) or `protoc` to generate code using the protobufs available in the `proto` directory. | | The [BSR generated SDKs](https://buf.build/cerbos/cerbos-api/sdks) feature can be used to download pre-packaged, generated code for supported languages. | | ------------------------------------------------------------------------------------------------------------------------------------------------------- | **Go** The [Cerbos Go SDK](https://pkg.go.dev/github.com/cerbos/cerbos/client) uses the gRPC API to communicate with Cerbos. The generated gRPC and protobuf code is available under the `github.com/cerbos/cerbos/api/genpb` package. ```sh go get github.com/cerbos/cerbos/api/genpb ``` You can also make use of [Buf generated SDKs](https://buf.build/cerbos/cerbos-api) to pull down the Cerbos gRPC API as a Go module: ```sh go get buf.build/gen/go/cerbos/cerbos-api/grpc/go@latest ``` API reference ==================== [API reference](%5Fattachments/cerbos-api.html) cerbos ==================== See [Install from binary](../installation/binary.html) or [Run from container](../installation/container.html) for instructions on how to install the `cerbos` binary. This binary provides the following subcommands: `compile` Validate, compile and run tests on a policy repo `healthcheck` Perform a healthcheck on a Cerbos PDP `repl` An interactive REPL (read-evaluate-print-loop) for CEL conditions `run` Start a PDP and run a command within its context `server` Start the PDP server Example: Running `compile` using the binary ```sh ./cerbos compile --help ``` Example: Running `compile` using the container ```sh docker run -i -t ghcr.io/cerbos/cerbos:0.45.1 compile --help ``` ## [](#compile)`compile` Command Runs the Cerbos compiler to validate policy definitions and run any test suites.
See [Policy compilation](../policies/compile.html) for more information. This command exits with different exit codes depending on the kind of failure. | 0 | No compile or test failures | | - | ---------------------------- | | 1 | Unknown failure | | 2 | Invalid arguments to command | | 3 | Compilation failed | | 4 | Tests failed | ```none Usage: cerbos compile Compile and test policies Examples: # Compile and run tests found in /path/to/policy/repo cerbos compile /path/to/policy/repo # Compile and run tests that contain "Delete" in their name cerbos compile --run=Delete /path/to/policy/repo # Compile but skip tests cerbos compile --skip-tests /path/to/policy/repo Arguments: Policy directory Flags: -h, --help Show context-sensitive help. --version Show cerbos version --ignore-schemas Ignore schemas during compilation --tests=STRING Path to the directory containing tests. Defaults to policy directory. --run=STRING Run only tests that match this regex --skip-tests Skip tests -o, --output="tree" Output format (tree,list,json) --test-output=TEST-OUTPUT Test output format. If unspecified matches the value of the output flag. (tree,list,json,junit) --color=COLOR Output color level (auto,never,always,256,16m). Defaults to auto. --no-color Disable colored output --verbose Verbose output on test failure ``` ## [](#healthcheck)`healthcheck` Command Utility to perform healthchecks on a Cerbos PDP. Can be used as a [Docker HEALTHCHECK](https://docs.docker.com/engine/reference/builder/#healthcheck) command. You can share the configuration between Cerbos PDP and the healthcheck by using the `CERBOS_CONFIG` environment variable to define the path to the config file. Example: Docker healthcheck based on mounted config file ```sh docker run -i -t -p 3592:3592 -p 3593:3593 \ -v /path/to/conf/dir:/config \ -e CERBOS_CONFIG=/config/.cerbos.yaml \ ghcr.io/cerbos/cerbos:0.45.1 ``` ```none Usage: cerbos healthcheck (hc) Healthcheck utility Performs a healthcheck on a Cerbos PDP. 
This can be used as a Docker HEALTHCHECK command.

When the path to the Cerbos config file is provided via the '--config' flag or the CERBOS_CONFIG environment variable, the healthcheck will be automatically configured based on the settings from the file.

By default, the gRPC endpoint will be checked using the gRPC healthcheck protocol. This is usually sufficient as the Cerbos REST API is built on top of the gRPC API as well.

Examples:

# Check gRPC endpoint
cerbos healthcheck --config=/path/to/.cerbos.yaml

# Check HTTP endpoint and ignore server certificate verification
cerbos healthcheck --config=/path/to/.cerbos.yaml --kind=http --insecure

# Check the HTTP endpoint of a specific host with no TLS.
cerbos healthcheck --kind=http --host-port=10.0.1.5:3592 --no-tls

Flags:
  -h, --help         Show context-sensitive help.
  --version          Show cerbos version
  --kind="grpc"      Healthcheck kind (grpc,http) ($CERBOS_HC_KIND)
  --insecure         Do not verify server certificate ($CERBOS_HC_INSECURE)
  --timeout=10s      Healthcheck timeout ($CERBOS_HC_TIMEOUT)

config
  --config=STRING    Cerbos config file ($CERBOS_CONFIG)

manual
  --host-port=STRING    Host and port to connect to ($CERBOS_HC_HOSTPORT)
  --ca-cert=STRING      Path to CA cert for validating server cert ($CERBOS_HC_CACERT)
  --no-tls              Don't use TLS ($CERBOS_HC_NOTLS)
``` ## [](#repl)`repl` Command The REPL is an interactive utility to experiment with [CEL conditions](../policies/conditions.html) used in Cerbos policy rules. All Cerbos library functions and special variables (`request`, `R`, `P` and so on) are available in this environment. Example: Running the REPL using the binary ```sh ./cerbos repl ``` Example: Running the REPL using the container ```sh docker run -i -t ghcr.io/cerbos/cerbos:0.45.1 repl ``` You can type in valid CEL expressions at the prompt to instantly evaluate them.

```none
-> 5 + 1
_ = 6

-> "test".charAt(1)
_ = "e"
```

The special variable `_` holds the result of the last expression evaluated.
```none
-> 5 + 5
_ = 10

-> _ * 10
_ = 100
```

You can define variables using the `:let` directive.

```none
-> :let x = hierarchy("a.b.c")
x = [
  "a",
  "b",
  "c"
]

-> :let y = hierarchy("a.b")
y = [
  "a",
  "b"
]

-> x.immediateChildOf(y)
_ = true
```

You can also set special variables used in Cerbos policies (`request`, `variables`, `R`, `P`, `V`) and try out CEL expressions using them.

```none
-> :let request = {
>   "principal":{"id":"john","roles":["employee"],"attr":{"scope":"foo.bar.baz.qux"}},
>   "resource":{"id":"x1","kind":"leave_request","attr":{"scope":"foo.bar"}}
> }

-> hierarchy(R.attr.scope).ancestorOf(hierarchy(P.attr.scope))
_ = true
```

Type `:vars` to display the values of all the variables currently defined in the environment. You can load a Cerbos policy into the REPL by typing `:load path/to/policy_file.yaml`. This will read the policy and load any rules that have conditions attached. These can be viewed by typing `:rules`. Execute any rule by providing its number to the `:exec` directive (for example, `:exec #2`). Rules are executed using the variables defined in the current REPL session. You can set or update them using the `:let` directive and re-execute the rules to see the effects.

```none
-> :load store/leave_request.yaml
Loaded resource.leave_request.v20210210

Policy variables:
{
  "pending_approval": "(\"PENDING_APPROVAL\")",
  "principal_location": "(P.attr.ip_address.inIPAddrRange(\"10.20.0.0/16\") ? \"GB\" : \"\")"
}

Conditional rules in 'resource.leave_request.v20210210'
[#0] actions:
- approve
condition:
  match:
    expr: request.resource.attr.status == V.pending_approval
derivedRoles:
- direct_manager
effect: EFFECT_ALLOW

-> :exec #0
└──request.resource.attr.status == V.pending_approval [false]
```

Type `:help` or `:h` to display help. Type `:quit`, `:q` or `:exit` to exit the REPL. | | Use the up/down arrow keys (or Ctrl+P/Ctrl+N) to navigate command history. Most of the standard line-editing commands such as Ctrl+a, Ctrl+h, Ctrl+r are supported as well.
| | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ## [](#run)`run` Command This provides a quick way to try out Cerbos. It launches a Cerbos PDP instance and then invokes a command of your choice that can then use the PDP to make access decisions. A good use case for this command is as an integration test runner. If you have written some tests that make use of Cerbos, you can run them within the context of an actual PDP instance as follows: ```sh cerbos run -- python -m unittest ``` By default, the policies are loaded from the `policies` directory in the current working directory and HTTP and gRPC endpoints will be exposed on `127.0.0.1:3592` and `127.0.0.1:3593` respectively. Your application can obtain the actual endpoint addresses by inspecting the `CERBOS_HTTP` or `CERBOS_GRPC` environment variables. If a file named `.cerbos.yaml` exists in the current working directory, that file will be used as the Cerbos [configuration file](../configuration/index.html). You can use a different config file or override specific config values using the same flags as the `server` command above. ```none Usage: cerbos run ... Run a command in the context of a Cerbos PDP Launches a command within the context of a Cerbos PDP. The policies are loaded by default from a directory named "policies" in the current working directory. The launched application can access Cerbos endpoints using the values from CERBOS_HTTP or CERBOS_GRPC environment variables. If a file named ".cerbos.yaml" exists in the current working directory, it will be used as the configuration file for the PDP. You can override the config file and/or other configuration options using the flags described below. Examples: # Launch Go tests within a Cerbos context cerbos run -- go test ./... 
# Start Cerbos with a custom configuration file and run Python tests within the context cerbos run --config=myconf.yaml -- python -m unittest # Silence Cerbos log output cerbos run --log-level=error -- curl -I http://127.0.0.1:3592/_cerbos/health Arguments: ... Command to run Flags: -h, --help Show context-sensitive help. --version Show cerbos version --log-level="info" Log level (debug,info,warn,error) --config=.cerbos.yaml Path to config file --set=server.adminAPI.enabled=true,... Config overrides --timeout=30s Cerbos startup timeout ``` ## [](#server)`server` Command Starts the Cerbos PDP. ```none Usage: cerbos server --config=.cerbos.yaml Start Cerbos server (PDP) Examples: # Start the server cerbos server --config=/path/to/.cerbos.yaml # Start the server with the Admin API enabled and the 'sqlite' storage driver cerbos server --config=/path/to/.cerbos.yaml --set=server.adminAPI.enabled=true --set=storage.driver=sqlite3 --set=storage.sqlite3.dsn=':memory:' Flags: -h, --help Show context-sensitive help. --version Show cerbos version --debug-listen-addr=:6666 Address to start the gops listener --log-level="info" Log level (debug,info,warn,error) --config=.cerbos.yaml Path to config file --set=server.adminAPI.enabled=true,... Config overrides ``` cerbosctl ==================== This utility can be downloaded as a separate container, tar archive, or [npm package](https://www.npmjs.com/package/cerbosctl). It is automatically installed when installing Cerbos through [Linux packages or the Homebrew tap](../installation/binary.html#linux-packages). 
Run from the container

```sh
docker run -it ghcr.io/cerbos/cerbosctl:0.45.1 \
  --server=192.168.1.10:3593 \
  --username=user \
  --password=password \
  get rp
```

__Download and run the appropriate binary__

| OS | Arch | Bundle |
| ----- | --------- | ----------------------------------------- |
| Linux | x86-64 | cerbosctl\_0.45.1\_Linux\_x86\_64.tar.gz |
| Linux | arm64 | cerbosctl\_0.45.1\_Linux\_arm64.tar.gz |
| MacOS | universal | cerbosctl\_0.45.1\_Darwin\_all.tar.gz |
| MacOS | x86-64 | cerbosctl\_0.45.1\_Darwin\_x86\_64.tar.gz |
| MacOS | arm64 | cerbosctl\_0.45.1\_Darwin\_arm64.tar.gz |

Cerbosctl requires the [Admin API to be enabled](../configuration/server.html#admin-api) on the Cerbos server. The server address to connect to and the credentials to authenticate with can be provided through environment variables or as arguments to the command.

```none
Usage: cerbosctl

A CLI for managing Cerbos

The Cerbos Admin API must be enabled in order for these commands to work. The Admin API requires credentials. They can be provided using a netrc file, environment variables or command-line arguments.

Environment variables

- CERBOS_SERVER: gRPC address of the Cerbos server
- CERBOS_USERNAME: Admin username
- CERBOS_PASSWORD: Admin password

When more than one method is used to provide credentials, the precedence from lowest to highest is: netrc < environment < command line.

Examples

# Connect to a TLS enabled server while skipping certificate verification and launch the decisions viewer
cerbosctl --server=localhost:3593 --username=user --password=password --insecure decisions

# Connect to a non-TLS server and launch the decisions viewer
cerbosctl --server=localhost:3593 --username=user --password=password --plaintext decisions

Flags:
  -h, --help    Show context-sensitive help.
  --server="localhost:3593"    Address of the Cerbos server ($CERBOS_SERVER)
  --username=STRING            Admin username ($CERBOS_USERNAME)
  --password=STRING            Admin password ($CERBOS_PASSWORD)
  --ca-cert=STRING             Path to the CA certificate for verifying server identity
  --client-cert=STRING         Path to the TLS client certificate
  --client-key=STRING          Path to the TLS client key
  --insecure                   Skip validating server certificate
  --plaintext                  Use plaintext protocol without TLS

Commands:
  get derived_roles (derived_role,dr) [ ...]
  get export_constants (ec) [ ...]
  get export_variables (ev) [ ...]
  get principal_policies (principal_policy,pp) [ ...]
  get resource_policies (resource_policy,rp) [ ...]
  get schemas (schema,s) [ ...]
  store export (e)
  store reload (r)
  delete schema (schemas,s) ...
  disable policy (policies,p) ...
  enable policy (policies,p) ...
  put policy (policies,p) ...
  put schema (schemas,s) ...
  decisions    Interactive decision log viewer
  audit        View audit logs
  version      Show cerbosctl and PDP version

Run "cerbosctl --help" for more information on a command.
``` ## [](#audit)`audit` This command allows you to view the audit logs captured by the Cerbos server. [Audit logging](../configuration/audit.html) must be enabled on the server to obtain the data through this command. Filters tail Get the last N records (e.g. `--tail=10`) between Get records between two ISO-8601 timestamps. If the last timestamp is left out, get records from the first timestamp up to now. * `--between=2021-07-01T00:00:00Z,2021-07-02T00:00:00Z`: From midnight of 2021-07-01 to midnight of 2021-07-02. * `--between=2021-07-01T00:00:00Z`: From midnight of 2021-07-01 to now. since Get records from N hours/minutes/seconds ago to now. (e.g. `--since=3h`) lookup Get a specific record by ID. (e.g.
`--lookup=01F9Y5MFYTX7Y87A30CTJ2FB0S`) View the last 10 access logs ```sh cerbosctl audit --kind=access --tail=10 ``` View the decision logs from midnight 2021-07-01 to midnight 2021-07-02 ```sh cerbosctl audit --kind=decision --between=2021-07-01T00:00:00Z,2021-07-02T00:00:00Z ``` View the decision logs from midnight 2021-07-01 to now ```sh cerbosctl audit --kind=decision --between=2021-07-01T00:00:00Z ``` View the access logs from 3 hours ago to now as newline-delimited JSON ```sh cerbosctl audit --kind=access --since=3h --raw ``` View a specific access log entry by call ID ```sh cerbosctl audit --kind=access --lookup=01F9Y5MFYTX7Y87A30CTJ2FB0S ``` ## [](#decisions)`decisions` This command starts an interactive text user interface to view and analyze the decision records captured by the Cerbos server. It accepts the same [filter flags](#audit-filters) as the `audit` command. ![Decisions](_images/decisions-tui.png) * tab Switch focus to different panes in the UI * esc Close window (or exit if you are in the main screen) * q Exit Use the arrow keys (or Vim keys h, j, k, l) to scroll horizontally or vertically. Press enter to select/open an item. Start analyzing the last 20 decision records ```sh cerbosctl decisions --tail=20 ``` ## [](#delete)`delete` This command deletes the schemas with the specified ids. Delete schemas cerbosctl delete schemas principal.json cerbosctl delete schema principal.json cerbosctl delete s principal.json Delete multiple schemas cerbosctl delete schemas principal.json leave_request.json cerbosctl delete schema principal.json leave_request.json cerbosctl delete s principal.json leave_request.json ## [](#disable)`disable` This command disables the policies with the specified ids. 
Disable policies cerbosctl disable policies derived_roles.my_derived_roles cerbosctl disable policy derived_roles.my_derived_roles cerbosctl disable p derived_roles.my_derived_roles Disable multiple policies cerbosctl disable policies derived_roles.my_derived_roles resource.leave_request.default cerbosctl disable policy derived_roles.my_derived_roles resource.leave_request.default cerbosctl disable p derived_roles.my_derived_roles resource.leave_request.default | | Scoped policies must have unbroken scope chains. If you’re disabling a scoped policy, make sure that its descendant policies are disabled as well. | | ----------------------------------------------------------------------------------------------------------------------------------------------------- | ## [](#enable)`enable` This command enables the policies with the specified ids. Enable policies cerbosctl enable policies derived_roles.my_derived_roles cerbosctl enable policy derived_roles.my_derived_roles cerbosctl enable p derived_roles.my_derived_roles Enable multiple policies cerbosctl enable policies derived_roles.my_derived_roles resource.leave_request.default cerbosctl enable policy derived_roles.my_derived_roles resource.leave_request.default cerbosctl enable p derived_roles.my_derived_roles resource.leave_request.default ## [](#get)`get` This command lists the policies available in the configured policy repository. You can also retrieve individual policies or schemas by their identifiers and view their definitions as YAML or JSON. You can filter the output using the `name` and `version` flags. Each flag accepts multiple comma-separated values which are OR’ed together. For example, `--name=a.yaml,b.yaml` matches policies that are either named `a.yaml` or `b.yaml`. Separately, you can filter the output using the `name-regexp`, `version-regexp` and `scope-regexp` flags. Each flag accepts a regular expression string. 
These are separate from the `name` and `version` flags above, and cannot be used with their respective counterparts. You can include disabled policies in the results by adding `--include-disabled` flag. You can optionally restrict the list to a specific set of policies by supplying their IDs as trailing arguments. List derived roles cerbosctl get derived_roles cerbosctl get derived_role cerbosctl get dr List principal policies cerbosctl get principal_policies cerbosctl get principal_policy cerbosctl get pp List resource policies cerbosctl get resource_policies cerbosctl get resource_policy cerbosctl get rp List derived\_roles where `name` is `my_policy` or `a_policy` cerbosctl get derived_roles --name my_policy,a_policy cerbosctl get dr --name my_policy,a_policy List derived\_roles where `name` is `my_policy` or `a_policy`, using regular expression cerbosctl get derived_roles --name-regexp "^(my|a)_policy\$" cerbosctl get dr --name-regexp "^(my|a)_policy\$" Get derived roles policies having the ID `common_roles.yaml` and `other_roles.yaml` cerbosctl get derived_roles common_roles.yaml other_roles.yaml cerbosctl get dr common_roles.yaml other_roles.yaml List principal\_policies where `version` is `default` or `v1` cerbosctl get principal_policies --version default,v1 cerbosctl get pp --version default,v1 List principal\_policies where `version` is `default` or `v1`, using regular expression cerbosctl get principal_policies --version-regexp "(default|v1)" cerbosctl get pp --version-regexp "(default|v1)" Get principal\_policies having the ID `alex.yaml` and `john.yaml` cerbosctl get principal_policies alex.yaml john.yaml cerbosctl get pp alex.yaml john.yaml List resource\_policies where `scope` includes the substring `foo`, using regular expression cerbosctl get resource_policies --scope-regexp foo cerbosctl get rp --scope-regexp foo Get resource\_policies having the ID `leave_request.yaml` and `purchase_order.yaml` cerbosctl get resource_policies leave_request.yaml 
purchase_order.yaml cerbosctl get rp leave_request.yaml purchase_order.yaml List derived\_roles and sort by column `policyId` or `name` cerbosctl get derived_roles --sort-by policyId cerbosctl get dr --sort-by policyId cerbosctl get derived_roles --sort-by name cerbosctl get dr --sort-by name List principal\_policies and sort by column `policyId`, `name` or `version` cerbosctl get principal_policies --sort-by policyId cerbosctl get pp --sort-by policyId cerbosctl get principal_policies --sort-by name cerbosctl get pp --sort-by name cerbosctl get principal_policies --sort-by version cerbosctl get pp --sort-by version List resource\_policies and sort by column `policyId`, `name` or `version` cerbosctl get resource_policies --sort-by policyId cerbosctl get rp --sort-by policyId cerbosctl get resource_policies --sort-by name cerbosctl get rp --sort-by name cerbosctl get resource_policies --sort-by version cerbosctl get rp --sort-by version Get JSON cerbosctl get derived_roles my_derived_roles --output=json Get YAML cerbosctl get derived_roles my_derived_roles --output=yaml ## [](#hub)`hub` Operations related to Cerbos Hub. ### [](#epdp)`epdp` Operations related to embedded PDPs. #### [](#list-candidates)`list-candidates` This command lists policies that are candidates for inclusion in the ePDP bundle. A policy is marked for inclusion if it is annotated with `hub.cerbos.cloud/embedded-pdp: "true"` in the `metadata.annotations` section of the policy. If a policy has the correct annotation, that policy and its ancestors (if it’s a scoped policy) are included in the Cerbos Hub embedded PDP bundle. If none of the policies in the repo are annotated, they are all included in the bundle by default. List candidates cerbosctl hub epdp list-candidates ./path/to/repository ## [](#inspect-policies)`inspect policies` This command inspects policies in the store. Currently, it lists actions and variables defined in the policies.
Inspect policies cerbosctl inspect policies ## [](#put)`put` This command puts the given policies or schemas to the configured policy repository. Put policies cerbosctl put policies ./path/to/policy.yaml cerbosctl put policy ./path/to/policy.yaml cerbosctl put p ./path/to/policy.yaml Put multiple policies cerbosctl put policy ./path/to/policy.yaml ./path/to/other/policy.yaml Put policies under a directory cerbosctl put policy ./dir/to/policies ./other/dir/to/policies Put policies under a directory recursively cerbosctl put policy --recursive ./dir/to/policies cerbosctl put policy -R ./dir/to/policies Put policies from a zip file cerbosctl put policy ./dir/to/policies.zip Put schemas cerbosctl put schemas ./path/to/schema.json cerbosctl put schema ./path/to/schema.json cerbosctl put s ./path/to/schema.json Put multiple schemas cerbosctl put schema ./path/to/schema.json ./path/to/other/schema.json Put schemas under a directory cerbosctl put schema ./dir/to/schemas ./other/dir/to/schemas Put schemas under a directory recursively cerbosctl put schema --recursive ./dir/to/schemas cerbosctl put schema -R ./dir/to/schemas Put schemas from a zip file cerbosctl put schema ./dir/to/schemas.zip ## [](#store)`store` Trigger operations on the policy store of the PDP ### [](#export)`export` Exports the policies and schemas from the store into a directory. Export policies and schemas from the store into a directory cerbosctl store export path/to/dir Export policies and schemas from the store into a zip archive cerbosctl store export path/to/archive.zip Export policies and schemas from the store into a gzip archive cerbosctl store export path/to/archive.gzip cerbosctl store export path/to/archive.tar.gz ### [](#reload)`reload` Reloads the store. 
Reload the store cerbosctl store reload Reload the store and wait until it finishes cerbosctl store reload --wait Cerbos CLI ==================== Every [Cerbos release](https://github.com/cerbos/cerbos/releases/tag/v0.45.1) ships with two binaries: [cerbos](cerbos.html) The Cerbos server (PDP) and the compiler/test runner [cerbosctl](cerbosctl.html) Command line utility to interact with Cerbos PDP instances that have the [Admin API enabled](../configuration/server.html#admin-api) Audit block ==================== The `audit` block configures the audit logging settings for the Cerbos instance. Audit logs capture access records and decisions made by the engine along with the associated context data. Cerbos API responses include a `cerbosCallId` field that contains the unique identifier under which the request was logged to the audit log (if enabled) and the Cerbos activity log. It is recommended that applications log this ID as part of their own activity logs too, so that those log entries can be joined together with Cerbos logs during log analysis to build a complete picture of the authorization decisions. | | Audit logging has some overhead in terms of resource usage (disk IO, CPU and memory). This overhead is usually negligible unless Cerbos is running in a resource-constrained environment. If resources are scarce or if you are expecting heavy traffic, disabling audit logging might have a positive impact on performance. | | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ```yaml
audit:
  accessLogsEnabled: false # AccessLogsEnabled defines whether access logging is enabled.
  backend: local # Backend states which backend to use for Audits.
  decisionLogFilters: # DecisionLogFilters define the filters to apply while producing decision logs.
    checkResources: # CheckResources defines the filters that apply to CheckResources calls.
      ignoreAllowAll: false # IgnoreAllowAll ignores responses that don't contain an EFFECT_DENY.
    planResources: # PlanResources defines the filters that apply to PlanResources calls.
      ignoreAll: false # IgnoreAll prevents any plan responses from being logged. Takes precedence over other filters.
      ignoreAlwaysAllow: false # IgnoreAlwaysAllow ignores ALWAYS_ALLOWED plans.
  decisionLogsEnabled: false # DecisionLogsEnabled defines whether logging of policy decisions is enabled.
  enabled: false # Enabled defines whether audit logging is enabled.
  excludeMetadataKeys: ['authorization'] # ExcludeMetadataKeys defines which gRPC request metadata keys should be excluded from the audit logs. Takes precedence over includeMetadataKeys.
  includeMetadataKeys: ['content-type'] # IncludeMetadataKeys defines which gRPC request metadata keys should be included in the audit logs.
  file:
    additionalPaths: [stdout] # AdditionalPaths to mirror the log output. Has performance implications. Use with caution.
    logRotation: # LogRotation settings (optional).
      maxFileAgeDays: 10 # MaxFileAgeDays sets the maximum age in days of old log files before they are deleted.
      maxFileCount: 10 # MaxFileCount sets the maximum number of files to retain.
      maxFileSizeMB: 100 # MaxFileSizeMB sets the maximum size of individual log files in megabytes.
    path: /path/to/file.log # Required. Path to the log file to use as output. The special values stdout and stderr can be used to write to stdout or stderr respectively.
  hub:
    advanced:
      bufferSize: 256
      flushInterval: 1s
      gcInterval: 60s
      maxBatchSize: 32
    mask: # Mask defines a list of attributes to exclude from the audit logs, specified as lists of JSONPaths
      checkResources:
        - inputs[*].principal.attr.foo
        - inputs[*].auxData
        - outputs
      metadata: ['authorization']
      peer:
        - address
        - forwarded_for
      planResources: ['input.principal.attr.nestedMap.foo']
    retentionPeriod: 168h # How long to keep records for
    storagePath: /path/to/dir # Path to store the data
  kafka:
    ack: all # Ack mode for producing messages. Valid values are "none", "leader" or "all" (default). Idempotency is disabled when mode is not "all".
    authentication: # Authentication
      tls:
        caPath: /path/to/ca.crt # Required. CAPath is the path to the CA certificate.
        certPath: /path/to/tls.cert # CertPath is the path to the client certificate.
        insecureSkipVerify: true # InsecureSkipVerify controls whether the server's certificate chain and host name are verified. Default is false.
        keyPath: /path/to/tls.key # KeyPath is the path to the client key.
        reloadInterval: 5m # ReloadInterval is the interval at which the TLS certificates are reloaded. The default is 0 (no reload).
    brokers: ['localhost:9092'] # Required. Brokers list to seed the Kafka client.
    clientID: cerbos # ClientID reported in Kafka connections.
    closeTimeout: 30s # CloseTimeout sets how long to wait for any remaining messages to be flushed when closing the client.
    compression: ['snappy'] # Compression sets the compression algorithm to use in order of priority. Valid values are "none", "gzip", "snappy", "lz4", "zstd". Default is ["snappy", "none"].
    encoding: json # Encoding format. Valid values are "json" (default) or "protobuf".
    maxBufferedRecords: 1000 # MaxBufferedRecords sets the maximum number of records the client should buffer in memory in async mode.
    produceSync: false # ProduceSync forces the client to produce messages to Kafka synchronously. This can have a significant impact on performance.
    topic: cerbos.audit.log # Required. Topic to write audit entries to.
  local:
    advanced:
      bufferSize: 256
      flushInterval: 1s
      gcInterval: 60s
      maxBatchSize: 32
    retentionPeriod: 168h # How long to keep records for
    storagePath: /path/to/dir # Path to store the data
```

Including or excluding request metadata in log entries

To tune how request metadata (headers) is logged to access and decision log entries, configure `includeMetadataKeys` and `excludeMetadataKeys` as follows:

* Both `includeMetadataKeys` and `excludeMetadataKeys` are empty: no metadata will be logged
* Only `includeMetadataKeys` is defined: only the metadata keys in the list will be logged
* Only `excludeMetadataKeys` is defined: everything except the keys defined in the list will be logged
* Both `includeMetadataKeys` and `excludeMetadataKeys` are defined: only the keys in the include list will be logged if, and only if, they are not in the exclude list

| | If requests contain sensitive data such as authorization tokens, they will be captured by the audit logs and visible to anyone with access to the log files. Cerbos automatically excludes the authorization header. However, if you use other header keys to store sensitive data, always exclude them using the excludeMetadataKeys configuration setting. |
| --- | --- |

## [](#file)File backend

The `file` backend writes audit records as newline-delimited JSON to a file or stdout/stderr. With this backend you can use your existing log aggregation system (Datadog agent, Elastic agent, Fluentd, Graylog — to name a few) to collect, process and archive the audit data from all Cerbos instances.
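The include/exclude precedence rules for request metadata described above can be sketched as a small predicate. This is purely an illustration of the documented behaviour, not Cerbos source code, and the function name is hypothetical:

```python
def should_log_metadata(key: str, include_keys: list[str], exclude_keys: list[str]) -> bool:
    """Hypothetical sketch of the documented metadata filtering precedence."""
    if not include_keys and not exclude_keys:
        return False  # Both lists empty: no metadata is logged.
    if key in exclude_keys:
        return False  # excludeMetadataKeys takes precedence over includeMetadataKeys.
    if include_keys:
        return key in include_keys  # Include list acts as an allow-list.
    return True  # Only excludeMetadataKeys defined: everything else is logged.
```

For example, with `includeMetadataKeys: ['content-type']` and `excludeMetadataKeys: ['authorization']`, only the `content-type` header would be written to the logs.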
| | This backend cannot be queried using the Admin API, `cerbosctl audit` or `cerbosctl decisions`. |
| --- | --- |

Minimal configuration with file output and no log rotation

```yaml
audit:
  enabled: true
  accessLogsEnabled: true
  decisionLogsEnabled: true
  backend: file
  file:
    path: /path/to/audit.log
```

Configuration with log rotation and output to both stdout and a file

```yaml
audit:
  enabled: true
  accessLogsEnabled: true
  decisionLogsEnabled: true
  backend: file
  file:
    path: /path/to/file.log
    additionalPaths:
      - stdout
    logRotation:
      maxFileAgeDays: 10 # Maximum age in days of old log files before they are deleted.
      maxFileCount: 10 # Maximum number of old log files to retain.
      maxFileSizeMB: 100 # Maximum size of individual log files in megabytes.
```

The `path` field can be set to the special names `stdout` or `stderr` to log to stdout or stderr. Note that this results in audit logs being mixed with normal Cerbos operational logs. It is recommended to use an actual file for audit log output if your container orchestrator supports collecting logs from files in addition to stdout/stderr. Audit log entries can be selected by setting a filter on `log.logger == "cerbos.audit"`. Access log entries have `log.kind == "access"` and decision log entries have `log.kind == "decision"`.

If log rotation is enabled, `maxFileSizeMB` is the only required setting. If the `maxFileCount` and `maxFileAgeDays` settings are not defined, files are never deleted by the Cerbos process.

## [](#hub)Hub backend

| | Requires a [Cerbos Hub](https://www.cerbos.dev/product-cerbos-hub) account. [![Try Cerbos Hub](../_images/try_cerbos_hub.png)](https://hub.cerbos.cloud) |
| --- | --- |

Securely sends audit logs to Cerbos Hub for aggregation and analysis.
This vastly simplifies the work that would otherwise be required to configure and deploy a log aggregation solution to securely collect, store and query audit logs from across your fleet. If you are new to Cerbos Hub, follow the [getting started guide](../../../cerbos-hub/getting-started.html). For more information about configuring the PDP to send audit logs to Cerbos Hub, refer to the [audit log collection documentation](../../../cerbos-hub/audit-log-collection.html).

## [](#kafka)Kafka backend

The `kafka` backend writes audit records to a Kafka topic. By default, the messages are published asynchronously to the specified topic in JSON format. The message header named `cerbos.audit.kind` has the value `access` for access log entries and `decision` for decision log entries. You can configure the audit logger to produce data in the Protocol Buffers binary encoding format as well. The schema for messages is available at .

Minimal configuration

```yaml
audit:
  enabled: true
  accessLogsEnabled: true
  decisionLogsEnabled: true
  backend: kafka
  kafka:
    brokers: ['broker1.kafka:9092', 'broker2.kafka:9092']
    topic: cerbos.audit.log
```

Full configuration

```yaml
audit:
  enabled: true
  accessLogsEnabled: true
  decisionLogsEnabled: true
  backend: kafka
  kafka:
    ack: all # Ack mode for producing messages. Valid values are "none", "leader" or "all" (default). Idempotency is disabled when mode is not "all".
    authentication: # Authentication
      tls:
        caPath: /path/to/ca.crt # Required. CAPath is the path to the CA certificate.
        certPath: /path/to/tls.cert # CertPath is the path to the client certificate.
        insecureSkipVerify: true # InsecureSkipVerify controls whether the server's certificate chain and host name are verified. Default is false.
        keyPath: /path/to/tls.key # KeyPath is the path to the client key.
        reloadInterval: 5m # ReloadInterval is the interval at which the TLS certificates are reloaded. The default is 0 (no reload).
    brokers: ['localhost:9092'] # Required. Brokers list to seed the Kafka client.
    clientID: cerbos # ClientID reported in Kafka connections.
    closeTimeout: 30s # CloseTimeout sets how long to wait for any remaining messages to be flushed when closing the client.
    encoding: json # Encoding format. Valid values are "json" (default) or "protobuf".
    maxBufferedRecords: 1000 # MaxBufferedRecords sets the maximum number of records the client should buffer in memory in async mode.
    produceSync: false # ProduceSync forces the client to produce messages to Kafka synchronously. This can have a significant impact on performance.
    topic: cerbos.audit.log # Required. Topic to write audit entries to.
    compression: ['snappy'] # Compression sets the compression algorithm to use in order of priority. Valid values are "none", "gzip", "snappy", "lz4", "zstd". Default is ["snappy", "none"].
```

## [](#local)Local backend

The `local` backend uses an embedded key-value store to save audit records. Records are preserved for seven days by default and can be queried using the [Admin API](../api/admin%5Fapi.html), the [cerbosctl audit](../cli/cerbosctl.html#audit) command or the [cerbosctl decisions](../cli/cerbosctl.html#decisions) terminal UI (TUI). The only required setting for the `local` backend is the `storagePath` field, which specifies the path on disk where the logs should be stored.

```yaml
audit:
  enabled: true
  accessLogsEnabled: true
  decisionLogsEnabled: true
  backend: local
  local:
    storagePath: /path/to/dir
    retentionPeriod: 168h
    advanced:
      bufferSize: 16 # Size of the memory buffer. Increasing this will use more memory and increase the chances of losing data during a crash.
      maxBatchSize: 16 # Write batch size. If your records are small, increasing this will reduce disk IO.
      flushInterval: 30s # Time to keep records in memory before committing.
      gcInterval: 15m # How often the garbage collector runs to remove old entries from the log.
```

Engine block
====================

## [](#default%5Fpolicy%5Fversion)Default policy version

[Cerbos policies](../policies/index.html) have a `version` field to support use cases such as having different policies for different environments (production, staging etc.) or for gradual rollout of a new version of an application. By default, when a request does not explicitly specify the policy version, the Cerbos engine attempts to find a matching policy that has its version set to `default`. You can change this fallback value by setting `defaultPolicyVersion`. For example, if you have a Cerbos deployment for your staging environment, you may want to set `defaultPolicyVersion: staging` to ensure that the default policies in effect are the ones versioned as `staging`.

```yaml
engine:
  defaultPolicyVersion: "default"
```

## [](#globals)Globals

Global variables are a way to pass environment-specific information to [policy conditions](../policies/conditions.html). For example, you might want to grant additional permissions to a role in your staging environment, without creating separate policy versions for different environments.

```yaml
engine:
  globals:
    environment: "staging"
```

Values set in `globals` can then be referenced in policy conditions:

```yaml
rules:
  - actions:
      - view
    effect: EFFECT_ALLOW
    roles:
      - developer
    condition:
      match:
        expr: globals.environment != "production"
```

As with other configuration settings, environment variables can be used to set global values.

```yaml
engine:
  globals:
    environment: ${CERBOS_ENVIRONMENT:development}
```

## [](#lenient%5Fscopes)Lenient scope search

When working with [scopes](../policies/scoped%5Fpolicies.html), the default behaviour of the Cerbos engine is to expect that a policy file exists for the requested scope. For example, if the API request defines `a.b.c` as the `scope`, a policy file _must exist_ in the policy repository with the `a.b.c` scope.
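Scope names form a dot-separated hierarchy, so `a.b.c` has the ancestors `a.b`, `a` and the root (empty) scope. As a hypothetical illustration (not Cerbos code), the fallback chain considered when a leaf scope is missing can be computed like this:

```python
def scope_candidates(scope: str) -> list[str]:
    """Return the scope itself followed by each ancestor, ending with the root (empty) scope."""
    parts = scope.split(".") if scope else []
    # Drop one trailing segment at a time: "a.b.c" -> "a.b" -> "a" -> ""
    return [".".join(parts[:i]) for i in range(len(parts), -1, -1)]
```

Calling `scope_candidates("a.b.c")` yields `["a.b.c", "a.b", "a", ""]`, which is the search order used when lenient scope search (described next) is enabled.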
This behaviour can be overridden by setting the `lenientScopeSearch` configuration to `true`. When lenient scope search is enabled, if a policy with scope `a.b.c` does not exist in the store, Cerbos will attempt to find scopes `a.b`, `a` and finally the root (empty) scope, in that order.

| | This setting only affects how Cerbos treats missing leaf scopes when searching for policies. The policies stored in your policy store _must_ have unbroken scope chains (for example, if you have a scoped policy `a.b.c` in the store, the policy files for scopes `a.b`, `a` and the root (empty) scope must also exist). |
| --- | --- |

```yaml
engine:
  lenientScopeSearch: true
```

AuxData block
====================

The `auxData` block configures the auxiliary data sources that can be referenced in policy conditions.

## [](#%5Fjwt)JWT

Cerbos supports reading claims from a JWT issued by an authentication system. This helps reduce the boilerplate on the client side to extract the claims from a JWT and add them as attributes to the Cerbos API request. (See [The Cerbos API](../api/index.html) and [Auxiliary Data](../policies/conditions.html#auxdata) for more information on how to craft the API request and access the JWT claims in policies.)

In order to verify the JWT, the Cerbos instance must have access to the appropriate keysets. They can be fetched from a URL or read from the local file system. Verification involves checking that the signature is valid and that the token has not expired.

Using multiple keysets

```yaml
auxData:
  jwt:
    keySets:
      - id: ks1 # Unique ID that can be used in API requests to indicate the keyset to use to verify a particular token.
        remote: # Fetch from a JWKS URL.
          url: https://domain.tld/.well-known/keys.jwks
      - id: ks2
        remote:
          url: https://other-domain.tld/.well-known/keys.jwks
          refreshInterval: 1h # Explicitly set the refresh interval.
      - id: ks3
        local: # Load from a local file.
          file: /path/to/keys.jwks
      - id: ks4
        local: # Load from base64-encoded key data defined inline.
          data: BASE64-ENCODED-KEY-DATA
      - id: ks5
        local:
          file: /path/to/keys.pem
          pem: true # Treat the file (or data) as PEM.
```

| | When multiple keysets are defined in the configuration file, all API requests _must_ include the keyset ID along with the JWT. When only a single keyset is defined in the configuration, then the keyset ID can be dropped from the API requests. |
| --- | --- |

When keysets are fetched from a `remote` source, if the `refreshInterval` is not defined in the configuration, Cerbos will respect the `Cache-Control` and `Expires` headers returned from the remote source when determining the refresh interval. If none of these data points are available, then the default refresh interval is one hour.

You can disable JWT verification by setting `disableVerification` to `true`. When verification is disabled, Cerbos will not perform cryptographic verification of the JWT but the `exp` and `nbf` claims are still checked to ensure that the token is valid. You can configure the acceptable time skew for those claims by setting `acceptableTimeSkew` to a positive time duration.

| | Disabling JWT verification is not recommended because it makes the system insecure by forcing Cerbos to evaluate policies using potentially tampered data. Similarly, it’s not recommended to set acceptableTimeSkew to more than a few seconds.
| --- | --- |

```yaml
auxData:
  jwt:
    disableVerification: true
    acceptableTimeSkew: 2s
```

Cerbos maintains an in-memory cache of verified JWTs to avoid repeating the cryptographic verification step on each request. Cached tokens are still validated on each request to make sure they are still valid for use. You can increase the size of the cache by setting `cacheSize`.

```yaml
auxData:
  jwt:
    cacheSize: 256
    keySets:
      - id: default
        remote:
          url: https://domain.tld/.well-known/keys.jwks
```

Some legacy authentication systems have key sets that do not contain `alg` or `kid` fields. Not having these fields defined is a security risk and the default behaviour of Cerbos is to fail the parsing of the JWT. If you are aware of the risks and still want to enable those tokens to be parsed, set the `optionalAlg` and `optionalKid` options.

```yaml
auxData:
  jwt:
    keySets:
      - id: default
        remote:
          url: https://domain.tld/.well-known/keys.jwks
        insecure:
          optionalAlg: true # Set to true only if the keyset doesn't have an alg field
          optionalKid: true # Set to true only if the keyset doesn't have a kid field
```

Configuration
====================

The Cerbos server is configured with a YAML file, conventionally named `.cerbos.yaml`. Start the server by passing the configuration file using the `--config` flag. The values defined in the file can be overridden from the command line by using the `--set` flag. The `--set` flag can be used multiple times.
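Conceptually, each `--set` flag applies a value at a dotted path on top of the configuration loaded from the file. The following sketch illustrates that idea only; it is not the actual Cerbos implementation, and the helper name is hypothetical:

```python
def apply_override(config: dict, dotted_path: str, value: str) -> None:
    """Set a value at a dotted path (e.g. 'server.httpListenAddr'),
    creating intermediate maps as needed."""
    keys = dotted_path.split(".")
    node = config
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value

# Overriding a loaded config, in the style of --set=engine.defaultPolicyVersion=staging:
cfg = {"server": {"httpListenAddr": ":3592"}}
apply_override(cfg, "engine.defaultPolicyVersion", "staging")
```

Overrides at paths that do not yet exist in the file simply create the corresponding nested sections, which is why `--set` can both change existing values and introduce new ones.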
For example, to override `server.httpListenAddr` and `engine.defaultPolicyVersion`, the `--set` flag can be used as follows:

```sh
./cerbos server --config=/path/to/.cerbos.yaml --set=server.httpListenAddr=:3592 --set=engine.defaultPolicyVersion=staging
```

| | Config values can reference environment variables by enclosing them between ${}, for example ${HOME}. Defaults can be set using ${VAR:default}. |
| --- | --- |

## [](#minimal-configuration)Minimal Configuration

At a minimum, Cerbos requires a storage driver to be configured. If no explicit configuration is provided using the `--config` flag, Cerbos defaults to a `disk` driver configured to look for policies in a directory named `policies` in the current working directory.

Default configuration

```yaml
---
server:
  httpListenAddr: ":3592"
  grpcListenAddr: ":3593"

engine:
  defaultPolicyVersion: "default"

auxData:
  jwt:
    disableVerification: true

storage:
  driver: "disk"
  disk:
    directory: "${PWD}/policies"
    watchForChanges: true
```

## [](#full-configuration)Full Configuration

Cerbos has many configuration options that are either optional or have reasonable defaults built in. The following section describes all user-configurable options and their defaults.

Cerbos configuration file

```yaml
---
audit:
  accessLogsEnabled: false # AccessLogsEnabled defines whether access logging is enabled.
  backend: local # Backend states which backend to use for Audits.
  decisionLogFilters: # DecisionLogFilters define the filters to apply while producing decision logs.
    checkResources: # CheckResources defines the filters that apply to CheckResources calls.
      ignoreAllowAll: false # IgnoreAllowAll ignores responses that don't contain an EFFECT_DENY.
    planResources: # PlanResources defines the filters that apply to PlanResources calls.
      ignoreAll: false # IgnoreAll prevents any plan responses from being logged. Takes precedence over other filters.
      ignoreAlwaysAllow: false # IgnoreAlwaysAllow ignores ALWAYS_ALLOWED plans.
  decisionLogsEnabled: false # DecisionLogsEnabled defines whether logging of policy decisions is enabled.
  enabled: false # Enabled defines whether audit logging is enabled.
  excludeMetadataKeys: ['authorization'] # ExcludeMetadataKeys defines which gRPC request metadata keys should be excluded from the audit logs. Takes precedence over includeMetadataKeys.
  includeMetadataKeys: ['content-type'] # IncludeMetadataKeys defines which gRPC request metadata keys should be included in the audit logs.
  file:
    additionalPaths: [stdout] # AdditionalPaths to mirror the log output. Has performance implications. Use with caution.
    logRotation: # LogRotation settings (optional).
      maxFileAgeDays: 10 # MaxFileAgeDays sets the maximum age in days of old log files before they are deleted.
      maxFileCount: 10 # MaxFileCount sets the maximum number of files to retain.
      maxFileSizeMB: 100 # MaxFileSizeMB sets the maximum size of individual log files in megabytes.
    path: /path/to/file.log # Required. Path to the log file to use as output. The special values stdout and stderr can be used to write to stdout or stderr respectively.
  hub:
    advanced:
      bufferSize: 256
      flushInterval: 1s
      gcInterval: 60s
      maxBatchSize: 32
    mask: # Mask defines a list of attributes to exclude from the audit logs, specified as lists of JSONPaths
      checkResources:
        - inputs[*].principal.attr.foo
        - inputs[*].auxData
        - outputs
      metadata: ['authorization']
      peer:
        - address
        - forwarded_for
      planResources: ['input.principal.attr.nestedMap.foo']
    retentionPeriod: 168h # How long to keep records for
    storagePath: /path/to/dir # Path to store the data
  kafka:
    ack: all # Ack mode for producing messages. Valid values are "none", "leader" or "all" (default). Idempotency is disabled when mode is not "all".
    authentication: # Authentication
      tls:
        caPath: /path/to/ca.crt # Required. CAPath is the path to the CA certificate.
        certPath: /path/to/tls.cert # CertPath is the path to the client certificate.
        insecureSkipVerify: true # InsecureSkipVerify controls whether the server's certificate chain and host name are verified. Default is false.
        keyPath: /path/to/tls.key # KeyPath is the path to the client key.
        reloadInterval: 5m # ReloadInterval is the interval at which the TLS certificates are reloaded. The default is 0 (no reload).
    brokers: ['localhost:9092'] # Required. Brokers list to seed the Kafka client.
    clientID: cerbos # ClientID reported in Kafka connections.
    closeTimeout: 30s # CloseTimeout sets how long to wait for any remaining messages to be flushed when closing the client.
    compression: ['snappy', 'none'] # Compression sets the compression algorithm to use in order of priority. Valid values are "none", "gzip", "snappy", "lz4", "zstd". Default is ["snappy", "none"].
    encoding: json # Encoding format. Valid values are "json" (default) or "protobuf".
    maxBufferedRecords: 1000 # MaxBufferedRecords sets the maximum number of records the client should buffer in memory in async mode.
    produceSync: false # ProduceSync forces the client to produce messages to Kafka synchronously. This can have a significant impact on performance.
    topic: cerbos.audit.log # Required. Topic to write audit entries to.
  local:
    advanced:
      bufferSize: 256
      flushInterval: 1s
      gcInterval: 60s
      maxBatchSize: 32
    retentionPeriod: 168h # How long to keep records for
    storagePath: /path/to/dir # Path to store the data
auxData:
  jwt: # JWT holds the configuration for JWTs used as an auxiliary data source for the engine.
    acceptableTimeSkew: 2s # AcceptableTimeSkew sets the acceptable skew when checking exp and nbf claims.
    cacheSize: 256 # CacheSize sets the number of verified tokens cached in memory. Set to negative value to disable caching.
    disableVerification: false # DisableVerification disables JWT verification.
    keySets: # KeySets is the list of keysets to be used to verify tokens.
      - id: ks1 # Required. ID is the unique reference to this keyset.
        insecure: # Insecure options for relaxing security. Not recommended for production use. Use with caution.
          optionalAlg: false # OptionalAlg configures Cerbos to not require the alg field to be set in the key set.
          optionalKid: false # OptionalKid configures Cerbos to not require the kid field to be set in the key set.
        local: # Local defines a local keyset. Mutually exclusive with Remote.
          data: base64encodedJWK # Data is the encoded JWK data for this keyset. Mutually exclusive with File.
          file: /path/to/keys.jwk # File is the path to file containing JWK data. Mutually exclusive with Data.
          pem: true # PEM indicates that the data is PEM encoded.
        remote: # Remote defines a remote keyset. Mutually exclusive with Local.
          refreshInterval: 1h # RefreshInterval is the refresh interval for the keyset.
          url: https://domain.tld/.well-known/keys.jwks # Required. URL is the JWKS URL to fetch the keyset from.
compile:
  cacheDuration: 60s # CacheDuration is the duration to cache an entry.
  cacheSize: 1024 # CacheSize is the number of compiled policies to cache in memory.
engine:
  defaultPolicyVersion: "default" # DefaultPolicyVersion defines what version to assume if the request does not specify one.
  globals: {"environment": "staging"} # Globals are environment-specific variables to be made available to policy conditions.
  lenientScopeSearch: false # LenientScopeSearch configures the engine to ignore missing scopes and search upwards through the scope tree until it finds a usable policy.
hub:
  credentials: # Credentials holds Cerbos Hub client credentials.
    clientID: 92B0K05B6HOF # ClientID of the Cerbos Hub credential. Defaults to the value of the CERBOS_HUB_CLIENT_ID environment variable.
    clientSecret: ${CERBOS_HUB_CLIENT_SECRET} # ClientSecret of the Cerbos Hub credential. Defaults to the value of the CERBOS_HUB_CLIENT_SECRET environment variable.
    pdpID: crb-004 # PDPID is the unique identifier for this Cerbos instance. Defaults to the value of the CERBOS_HUB_PDP_ID environment variable.
    workspaceSecret: ${CERBOS_HUB_WORKSPACE_SECRET} # WorkspaceSecret used to decrypt the bundles. Defaults to the value of the CERBOS_HUB_WORKSPACE_SECRET environment variable.
schema:
  cacheSize: 1024 # CacheSize defines the number of schemas to cache in memory.
  enforcement: reject # Enforcement defines level of the validations. Possible values are none, warn, reject.
server:
  apiExplorerEnabled: true # APIExplorerEnabled defines whether the API explorer UI is enabled.
  adminAPI: # AdminAPI defines the admin API configuration.
    adminCredentials: # AdminCredentials defines the admin user credentials.
      passwordHash: JDJ5JDEwJEdEOVFzZDE2VVhoVkR0N2VkUFBVM09nalc0QnNZaC9xc2E4bS9mcUJJcEZXenp5OUpjMi91Cgo= # PasswordHash is the base64-encoded bcrypt hash of the password to use for authentication.
      username: cerbos # Username is the hardcoded username to use for authentication.
    enabled: true # Enabled defines whether the admin API is enabled.
  advanced: # Advanced server settings.
    grpc: # GRPC server settings.
      connectionTimeout: 60s # ConnectionTimeout sets the timeout for establishing a new connection.
      maxConcurrentStreams: 1024 # MaxConcurrentStreams sets the maximum concurrent streams per connection. Defaults to 1024. Set to 0 to allow the maximum possible number of streams.
      maxConnectionAge: 600s # MaxConnectionAge sets the maximum age of a connection.
      maxRecvMsgSizeBytes: 4194304 # MaxRecvMsgSizeBytes sets the maximum size of a single request message. Defaults to 4MiB. Affects performance and resource utilisation.
    http: # HTTP server settings.
      idleTimeout: 120s # IdleTimeout sets the keepalive timeout.
      readHeaderTimeout: 15s # ReadHeaderTimeout sets the timeout for reading request headers.
      readTimeout: 30s # ReadTimeout sets the timeout for reading a request.
      writeTimeout: 30s # WriteTimeout sets the timeout for writing a response.
  cors: # CORS defines the CORS configuration for the server.
    allowedHeaders: ['content-type'] # AllowedHeaders is the contents of the allowed-headers header.
    allowedOrigins: ['*'] # AllowedOrigins is the contents of the allowed-origins header.
    disabled: false # Disabled sets whether CORS is disabled.
    maxAge: 10s # MaxAge is the max age of the CORS preflight check.
  grpcListenAddr: ":3593" # Required. GRPCListenAddr is the dedicated GRPC address.
  httpListenAddr: ":3592" # Required. HTTPListenAddr is the dedicated HTTP address.
  logRequestPayloads: false # LogRequestPayloads defines whether the request payloads should be logged.
  metricsEnabled: true # MetricsEnabled defines whether the metrics endpoint is enabled.
  requestLimits: # RequestLimits defines the limits for requests.
    maxActionsPerResource: 50 # MaxActionsPerResource sets the maximum number of actions that could be checked for a resource in a single request.
    maxResourcesPerRequest: 50 # MaxResourcesPerRequest sets the maximum number of resources that could be sent in a single request.
  tls: # TLS defines the TLS configuration for the server.
    caCert: /path/to/CA_certificate # CACert is the path to the optional CA certificate for verifying client requests.
    cert: /path/to/certificate # Cert is the path to the TLS certificate file.
    key: /path/to/private_key # Key is the path to the TLS private key file.
  udsFileMode: 0o766 # UDSFileMode sets the file mode of the unix domain sockets created by the server.
storage: # This section is required. The field driver must be set to indicate which driver to use.
  driver: "disk" # Required. Driver defines which storage driver to use.
  blob: # This section is required only if storage.driver is blob.
    bucket: "s3://my-bucket-name?region=us-east-2" # Required. Bucket URL (Examples: s3://my-bucket?region=us-west-1 gs://my-bucket azblob://my-container).
    downloadTimeout: 30s # DownloadTimeout specifies the timeout for downloading from cloud storage.
    prefix: policies # Prefix specifies a subdirectory to download.
    requestTimeout: 10s # RequestTimeout specifies the timeout for an HTTP request.
    updatePollInterval: 15s # UpdatePollInterval specifies the interval to poll the cloud storage. Set to 0 to disable.
    workDir: ${HOME}/tmp/cerbos/work # WorkDir is the local path to check out policies to.
  disk: # This section is required only if storage.driver is disk.
    directory: pkg/test/testdata/store # Required. Directory is the path on disk where policies are stored.
    watchForChanges: false # Required. WatchForChanges enables watching the directory for changes.
  git: # This section is required only if storage.driver is git.
    branch: policies # Branch is the branch to checkout.
    checkoutDir: ${HOME}/tmp/cerbos/work # CheckoutDir is the local path to checkout the Git repo to.
    https: # HTTPS holds auth details for the HTTPS protocol.
      password: ${GITHUB_TOKEN} # The password (or token) to use for authentication.
      username: cerbos # The username to use for authentication.
    operationTimeout: 60s # OperationTimeout specifies the timeout for git operations.
    protocol: file # Required. Protocol is the Git protocol to use. Valid values are https, ssh, and file.
    ssh: # SSH holds auth details for the SSH protocol.
      password: pw # The password to the SSH private key.
      privateKeyFile: ${HOME}/.ssh/id_rsa # The path to the SSH private key file.
      user: git # The git user. Defaults to git.
    subDir: policies # SubDir is the path under the checked-out Git repo where the policies are stored.
    url: file://${HOME}/tmp/cerbos/policies # Required. URL is the URL to the Git repo.
    updatePollInterval: 60s # UpdatePollInterval specifies the interval to poll the Git repository for changes. Set to 0 to disable.
  hub: # This section is required only if storage.driver is hub.
    cacheSize: 1024 # CacheSize defines the number of policies to cache in memory.
    local: # Local holds configuration for local bundle source.
      bundlePath: /path/to/bundle.crbp # Required. BundlePath is the full path to the local bundle file.
tempDir: ${TEMP} # TempDir is the directory to use for temporary files. remote: # Remote holds configuration for remote bundle source. Takes precedence over local if both are defined. bundleLabel: latest # Required. BundleLabel to fetch from the server. cacheDir: ${XDG_CACHE_DIR} # CacheDir is the directory to use for caching downloaded bundles. disableAutoUpdate: false # DisableAutoUpdate sets whether new bundles should be automatically downloaded and applied. tempDir: ${TEMP} # TempDir is the directory to use for temporary files. mysql: # This section is required only if storage.driver is mysql. connPool: maxLifeTime: 60m maxIdleTime: 45s maxOpen: 4 maxIdle: 1 connRetry: maxAttempts: 3 initialInterval: 0.5s maxInterval: 60s dsn: "user:password@tcp(localhost:3306)/db?interpolateParams=true" # Required. DSN is the data source connection string. serverPubKey: mykey: testdata/server_public_key.pem skipSchemaCheck: false # SkipSchemaCheck skips checking for required database tables on startup. tls: mytls: cert: /path/to/certificate key: /path/to/private_key caCert: /path/to/CA_certificate overlay: # This section is required only if storage.driver is overlay. baseDriver: blob # Required. BaseDriver is the default storage driver fallbackDriver: disk # Required. FallbackDriver is the secondary or fallback storage driver fallbackErrorThreshold: 5 # FallbackErrorThreshold is the max number of errors we allow within the fallbackErrorWindow period fallbackErrorWindow: 5m # FallbackErrorWindow is the cyclic period within which we aggregate failures postgres: # This section is required only if storage.driver is postgres. connPool: maxLifeTime: 60m maxIdleTime: 45s maxOpen: 4 maxIdle: 1 connRetry: maxAttempts: 3 initialInterval: 0.5s maxInterval: 60s skipSchemaCheck: false # SkipSchemaCheck skips checking for required database tables on startup. url: "postgres://user:password@localhost:port/db" # Required. URL is the Postgres connection URL. 
    # See https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING
  sqlite3: # This section is required only if storage.driver is sqlite3.
    dsn: ":memory:?_fk=true" # Required. Data source name.
telemetry:
  disabled: false # Disabled sets whether telemetry collection is disabled or not.
  reportInterval: 1h # ReportInterval is the interval between telemetry pings.
  stateDir: ${HOME}/.config/cerbos # StateDir is used to persist state to avoid repeatedly sending the same data.
```

Observability
====================

Cerbos is designed from the ground up to be cloud native and has first-class support for observability via OpenTelemetry metrics and distributed traces.

## [](#metrics)Metrics

By default, Cerbos exposes a metrics endpoint at `/_cerbos/metrics` that can be scraped by Prometheus or any other scraper that supports the Prometheus metrics format. This endpoint can be disabled by setting the `server.metricsEnabled` configuration value to `false` (see [Server block](server.html)).

Cerbos also supports OpenTelemetry protocol (OTLP) push metrics, which can be configured using [OpenTelemetry environment variables](https://opentelemetry.io/docs/specs/otel/configuration/sdk-environment-variables/). The following environment variables are supported.

| Environment variable | Description |
| --- | --- |
| OTEL\_EXPORTER\_OTLP\_METRICS\_ENDPOINT or OTEL\_EXPORTER\_OTLP\_ENDPOINT | Address of the OTLP metrics receiver (for example: ). If not defined, OTLP metrics are disabled. |
| OTEL\_EXPORTER\_OTLP\_METRICS\_INSECURE or OTEL\_EXPORTER\_OTLP\_INSECURE | Skip validating the TLS certificate of the endpoint. |
| OTEL\_EXPORTER\_OTLP\_METRICS\_CERTIFICATE or OTEL\_EXPORTER\_OTLP\_CERTIFICATE | Path to the certificate to use for validating the server’s TLS credentials. |
| OTEL\_EXPORTER\_OTLP\_METRICS\_CLIENT\_CERTIFICATE or OTEL\_EXPORTER\_OTLP\_CLIENT\_CERTIFICATE | Path to the client certificate to use for mTLS. |
| OTEL\_EXPORTER\_OTLP\_METRICS\_CLIENT\_KEY or OTEL\_EXPORTER\_OTLP\_CLIENT\_KEY | Path to the client key to use for mTLS. |
| OTEL\_EXPORTER\_OTLP\_METRICS\_PROTOCOL or OTEL\_EXPORTER\_OTLP\_PROTOCOL | OTLP protocol. Supported values are grpc and http/protobuf. Defaults to grpc. |
| OTEL\_METRIC\_EXPORT\_INTERVAL | The export interval in milliseconds. Defaults to 60000. |
| OTEL\_METRIC\_EXPORT\_TIMEOUT | Timeout for exporting the data in milliseconds. Defaults to 30000. |
| OTEL\_METRICS\_EXPORTER | Set to otlp to enable the OTLP exporter. Defaults to prometheus. |

Refer to for more information about exporter configuration through environment variables. Note that the OpenTelemetry Go SDK used by Cerbos might not have full support for some of the environment variables listed in the OpenTelemetry specification.

| | OTEL\_METRICS\_EXPORTER and OTEL\_EXPORTER\_OTLP\_METRICS\_ENDPOINT are the only required environment variables to enable OTLP metrics. |
| --- | --- |

## [](#traces)Traces

Cerbos supports distributed tracing to provide insights into application performance and the request lifecycle. Traces from Cerbos can be exported to any compatible collector that supports the OpenTelemetry protocol (OTLP).

Trace configuration should be done using [OpenTelemetry environment variables](https://opentelemetry.io/docs/specs/otel/configuration/sdk-environment-variables/).
The following environment variables are supported.

| | If you are upgrading from a Cerbos version older than 0.33.0, refer to [migration instructions](tracing.html#migration) for information about mapping file-based configuration to environment variables. |
| --- | --- |

| Environment variable | Description |
| --- | --- |
| OTEL\_SERVICE\_NAME | Service name reported in the traces. Defaults to cerbos. |
| OTEL\_TRACES\_SAMPLER | [Trace sampler](https://opentelemetry.io/docs/specs/otel/trace/sdk/#sampling). Defaults to parentbased\_always\_off. Supported values: always\_on (record every trace), always\_off (don’t record any traces), traceidratio (record a fraction of traces based on ID; set OTEL\_TRACES\_SAMPLER\_ARG to a value between 0 and 1 to define the fraction), parentbased\_always\_on (record all traces except those where the parent span is not sampled), parentbased\_always\_off (don’t record any traces unless the parent span is sampled), parentbased\_traceidratio (record a fraction of traces where the parent span is sampled; set OTEL\_TRACES\_SAMPLER\_ARG to a value between 0 and 1 to define the fraction). |
| OTEL\_TRACES\_SAMPLER\_ARG | Sampling ratio when OTEL\_TRACES\_SAMPLER is a ratio-based sampler. Defaults to 0.1. |
| OTEL\_EXPORTER\_OTLP\_TRACES\_ENDPOINT or OTEL\_EXPORTER\_OTLP\_ENDPOINT | Address of the OTLP collector (for example: ). If not defined, traces are disabled. |
| OTEL\_EXPORTER\_OTLP\_TRACES\_INSECURE or OTEL\_EXPORTER\_OTLP\_INSECURE | Skip validating the TLS certificate of the endpoint. |
| OTEL\_EXPORTER\_OTLP\_TRACES\_CERTIFICATE or OTEL\_EXPORTER\_OTLP\_CERTIFICATE | Path to the certificate to use for validating the server’s TLS credentials. |
| OTEL\_EXPORTER\_OTLP\_TRACES\_CLIENT\_CERTIFICATE or OTEL\_EXPORTER\_OTLP\_CLIENT\_CERTIFICATE | Path to the client certificate to use for mTLS. |
| OTEL\_EXPORTER\_OTLP\_TRACES\_CLIENT\_KEY or OTEL\_EXPORTER\_OTLP\_CLIENT\_KEY | Path to the client key to use for mTLS. |
| OTEL\_EXPORTER\_OTLP\_TRACES\_PROTOCOL or OTEL\_EXPORTER\_OTLP\_PROTOCOL | OTLP protocol. Supported values are grpc and http/protobuf. Defaults to grpc. |

Refer to for more information about exporter configuration through environment variables. Note that the OpenTelemetry Go SDK used by Cerbos might not have full support for some of the environment variables listed in the OpenTelemetry specification.

| | OTEL\_EXPORTER\_OTLP\_TRACES\_ENDPOINT is the only required environment variable to enable OTLP trace exports. |
| --- | --- |

Schema block
====================

See [Schemas](../policies/schemas.html) for more information about schemas.
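The validation schemas referenced by policies are standard JSON Schema documents. As a rough illustration only (the attribute names below are hypothetical and not part of Cerbos), a schema describing principal attributes might look like this:

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "properties": {
    "department": { "type": "string" },
    "clearance_level": { "type": "integer", "minimum": 1, "maximum": 5 }
  },
  "required": ["department"]
}
```

The enforcement setting below controls what happens when a request fails validation against such a schema.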
## [](#%5Fenforcement)Enforcement

`enforcement` can be set to one of the following three values:

* `none`: Do not validate requests using schemas
* `warn`: Validate requests and log warnings when there are validation failures
* `reject`: Deny the request if it fails validation

```yaml
schema:
  enforcement: reject
```

Server block
====================

## [](#%5Flisten%5Faddresses)Listen addresses

By default, the server starts an HTTP server on port `3592` and a gRPC server on port `3593`, listening on all available interfaces.

Listen on all available interfaces (default)

```yaml
server:
  httpListenAddr: ":3592"
  grpcListenAddr: ":3593"
```

Listen on a specific interface

```yaml
server:
  httpListenAddr: "192.168.0.17:3592"
  grpcListenAddr: "192.168.0.17:3593"
```

Listen on a Unix domain socket

```yaml
server:
  httpListenAddr: "unix:/var/sock/cerbos.http"
  grpcListenAddr: "unix:/var/sock/cerbos.grpc"
```

Listen on a Unix domain socket with a specific file mode

```yaml
server:
  httpListenAddr: "unix:/var/sock/cerbos.http"
  grpcListenAddr: "unix:/var/sock/cerbos.grpc"
  udsFileMode: 0o776
```

## [](#%5Fmetrics)Metrics

By default, Prometheus metrics are available to scrape from the `/_cerbos/metrics` HTTP endpoint. To disable metrics reporting, set `metricsEnabled` to `false`.

```yaml
server:
  metricsEnabled: false
```

## [](#%5Fpayload%5Flogging)Payload logging

For debugging or auditing purposes, you can enable request and response payload logging for each request.

| | Enabling this setting affects server performance and could expose potentially sensitive data contained in the requests to anyone with access to the server logs.
| --- | --- |

```yaml
server:
  logRequestPayloads: true
```

## [](#%5Ftransport%5Flayer%5Fsecurity%5Ftls)Transport layer security (TLS)

You can enable transport layer security (TLS) by defining the paths to the certificate and key files in the `tls` section.

```yaml
server:
  tls:
    cert: /path/to/certificate
    key: /path/to/private_key
```

| | For production use cases that require automatic certificate reloading, workload identities and other advanced features, we recommend running a proxy server such as [Envoy](https://www.envoyproxy.io), [Ghostunnel](https://github.com/ghostunnel/ghostunnel) or [Traefik](https://traefik.io) in front of the Cerbos server. |
| --- | --- |

## [](#%5Fcors)CORS

By default, [CORS](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) is enabled on the HTTP service with all origins allowed. The default allowed headers are `accept`, `content-type`, `user-agent` and `x-requested-with`. You can disable CORS by setting `server.cors.disabled` to `true`. You can also restrict the lists of allowed origins and headers by setting `server.cors.allowedOrigins` and `server.cors.allowedHeaders` respectively.

```yaml
server:
  cors:
    allowedOrigins:
      - example.com
      - example.org
    allowedHeaders:
      - accept
      - content-type
      - user-agent
      - x-custom-header
      - x-requested-with
```

## [](#request-limits)Request limits

By default, each Cerbos API request can include a batch of 50 resources with up to 50 actions to be checked for each resource.
This limit is in place to prevent the server from being overloaded by very large requests, which affects throughput as well as CPU, memory, and I/O usage.

| | Changing these settings could have a large impact on the performance and resource utilisation of Cerbos instances. |
| --- | --- |

```yaml
server:
  requestLimits:
    maxActionsPerResource: 50
    maxResourcesPerRequest: 50
```

## [](#admin-api)Enable Admin API

The [Cerbos Admin API](../api/admin%5Fapi.html) provides administration functions, such as adding or updating policies (if the underlying storage engine supports it), to the running Cerbos instance. It is disabled by default. Authentication is mandatory for the Admin API. See the [Cerbos Admin API documentation](../api/admin%5Fapi.html) for more details.

| | TLS should be enabled to ensure that credentials are transmitted securely over the network. We also highly recommend changing the default username and password when deploying Cerbos. |
| --- | --- |

```yaml
server:
  adminAPI:
    enabled: true
    adminCredentials:
      username: cerbos
      passwordHash: JDJ5JDEwJE5HYnk4cTY3VTE1bFV1NlR2bmp3ME9QOXdXQXFROGtBb2lWREdEY2xXbzR6WnoxYWtSNWNDCgo=
```

### [](#password-hash)Generating a password hash

Cerbos expects the password to be hashed with bcrypt and encoded with base64. This can be achieved using the `htpasswd` and `base64` utilities available on most operating systems.

```sh
echo "cerbosAdmin" | htpasswd -niBC 10 cerbos | cut -d ':' -f 2 | base64 -w0
```

| | On MacOS, the base64 utility does not require the `-w0` argument:
`echo "cerbosAdmin" \| htpasswd -niBC 10 cerbos \| cut -d ':' -f 2 \| base64` |
| --- | --- |

| | The output of the above command for a given password value is not deterministic. It will vary between invocations and between different machines. This is because the bcrypt algorithm uses a salt (random noise) to make password cracking harder. |
| --- | --- |

Storage block
====================

Cerbos supports multiple backends for storing policies. Which storage driver to use is defined by the `driver` setting.

## [](#blob-driver)Blob driver

Cerbos policies can be stored in AWS S3, Google Cloud Storage, or any other S3-compatible storage system such as [Minio](https://www.minio.io).

Configuration keys

* `bucket`: Required. A URL specifying the service (e.g. S3, GCS), the storage bucket, and any other configuration parameters required by the provider.
  * AWS S3: `s3://my-bucket?region=us-west-1`. The region must be specified in the URL.
  * Google Cloud Storage: `gs://my-bucket`
  * S3-compatible (e.g. Minio): `s3://my-bucket?endpoint=my.minio.local:8080&disableSSL=true&hostname_immutable=true&region=local`. The region must be specified in the URL.
* `prefix`: Optional. Look for policies only under this key prefix.
* `workDir`: Optional. Path to the local directory to download the policies to. Defaults to the system cache directory if not specified.
* `updatePollInterval`: Optional. How frequently the blob store should be checked to discover new or updated policies. Defaults to 0, which disables polling.
* `requestTimeout`: Optional. HTTP request timeout. Each policy file is downloaded with a single HTTP request. Defaults to 5s.
* `downloadTimeout`: Optional. Timeout for downloading all policies from the storage provider. Must be greater than the `requestTimeout`. Defaults to 60s.

| | Setting the updatePollInterval to a low value could increase resource consumption in both the client and the server systems. Some managed service providers may even impose rate limits or temporary suspensions on your account if the number of requests is too high. |
| --- | --- |

Credentials for accessing the storage buckets are retrieved from the environment. The method of specifying credentials in the environment varies by cloud provider and security configuration. Usually, it involves defining environment variables such as `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` for S3 and `GOOGLE_APPLICATION_CREDENTIALS` for GCS. Refer to the relevant cloud provider documentation for more details.

* AWS:
* Google:

AWS S3

```yaml
storage:
  driver: "blob"
  blob:
    bucket: "s3://my-bucket-name?region=us-east-2"
    prefix: policies
    workDir: ${HOME}/tmp/cerbos/work
    updatePollInterval: 15s
    downloadTimeout: 30s
    requestTimeout: 10s
```

Google Cloud Storage

```yaml
storage:
  driver: "blob"
  blob:
    bucket: "gs://my-bucket-name"
    workDir: ${HOME}/tmp/cerbos/work
    updatePollInterval: 10s
```

Minio local container

```yaml
storage:
  driver: "blob"
  blob:
    bucket: "s3://my-bucket-name?endpoint=localhost:9000&disableSSL=true&hostname_immutable=true&region=local"
    workDir: ${HOME}/tmp/cerbos/work
    updatePollInterval: 10s
```

## [](#disk-driver)Disk driver

The disk driver serves policies from a directory on the filesystem. Any `.yaml`, `.yml` or `.json` files in the directory tree rooted at the given path will be read and parsed as policies.
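The file-matching rule described above can be sanity-checked with a short script that mirrors it. This is only an illustration of the documented rule (`.yaml`, `.yml` or `.json` anywhere under the root), not Cerbos's actual loader:

```python
from pathlib import Path

# Extensions the disk driver is documented to pick up.
POLICY_EXTENSIONS = {".yaml", ".yml", ".json"}

def candidate_policy_files(root: str) -> list[Path]:
    """Return files under `root` that a disk store would consider policies."""
    return sorted(
        p for p in Path(root).rglob("*")
        if p.is_file() and p.suffix in POLICY_EXTENSIONS
    )
```

Running something like this against a policy directory before deployment can catch stray YAML files that would be parsed unexpectedly.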
Static fileset with no change detection

```yaml
storage:
  driver: disk
  disk:
    directory: /etc/cerbos/policies
```

Dynamic fileset with change detection

```yaml
storage:
  driver: disk
  disk:
    directory: /etc/cerbos/policies
    watchForChanges: true
```

| | On some platforms the automatic change detection feature can be inefficient and resource-intensive if the watched directory contains many files or gets updated frequently. |
| --- | --- |

### [](#disk-driver-archives)Archive files

Alternatively, you can archive and/or compress your policies directory into a Zip (`.zip`), Tar (`.tar`) or Gzip (`.tgz` or `.tar.gz`) file. The archive is assumed to be laid out like a standard policy directory and must not contain any non-policy YAML files. Specify the archive file in your configuration like so:

Archived fileset using a Zip file

```yaml
storage:
  driver: disk
  disk:
    directory: /etc/cerbos/policies.zip
```

| | Change detection is disabled when using archive files. |
| --- | --- |

## [](#git-driver)Git driver

Git is the preferred method of storing Cerbos policies. The server detects when new commits are made to the git repository and refreshes its state based on the changes.

| | Azure DevOps repositories use a newer protocol that is currently not supported by the Git library used by Cerbos. We are working to address this issue. In the meantime, please consider using the Cerbos disk storage in conjunction with an external Git sync implementation, or using a CI pipeline to publish your policies to another storage implementation supported by Cerbos.
| --- | --- |

* Git repositories can be local (`file` protocol) or remote (`ssh` or `https`). Note that the local `file` protocol requires `git` to be available and cannot be used with the Cerbos container.
* If no `branch` is specified, the default branch is the `master` branch.
* If no `subDir` is specified, the entire repository is scanned for policies (`.yaml`, `.yml` or `.json`).
* The `checkoutDir` is the working directory of the server and must be writable by the server process.
* If `updatePollInterval` is set to 0, the source repository will not be polled to pick up new commits.
* If `operationTimeout` is not specified, the default timeout for git operations is 60 seconds.

| | If the git repository is remote, setting the updatePollInterval to a low value could increase resource consumption in both the client and the server systems. Some managed service providers may even impose rate limits or temporary suspensions on your account if the number of requests is too high.
| --- | --- |

Local git repository

```yaml
storage:
  driver: "git"
  git:
    protocol: file
    url: file://${HOME}/tmp/cerbos/policies
    checkoutDir: ${HOME}/tmp/cerbos/work
    updatePollInterval: 10s
```

Remote git repository accessed over HTTPS

```yaml
storage:
  driver: "git"
  git:
    protocol: https
    url: https://github.com/cerbos/policy-test.git
    branch: main
    subDir: policies
    checkoutDir: ${HOME}/tmp/work/policies
    updatePollInterval: 60s
    operationTimeout: 30s
    https:
      username: cerbos
      password: ${GITHUB_TOKEN}
```

Remote git repository accessed over SSH

```yaml
storage:
  driver: "git"
  git:
    protocol: ssh
    url: github.com:cerbos/policy-test.git
    branch: main
    subDir: policies
    checkoutDir: ${HOME}/tmp/cerbos/work
    updatePollInterval: 60s
    ssh:
      user: git
      privateKeyFile: ${HOME}/.ssh/id_rsa
```

## [](#hub)Hub driver

| | Requires a [Cerbos Hub](https://www.cerbos.dev/product-cerbos-hub) account. [![Try Cerbos Hub](../_images/try_cerbos_hub.png)](https://hub.cerbos.cloud) |
| --- | --- |

Connects the PDP to a Cerbos Hub [deployment label](#cerbos-hub:ROOT:deployment-labels.adoc). Whenever a policy change is detected, the Cerbos Hub CI/CD pipeline compiles, tests and pushes an optimized policy bundle to the PDP. If you are new to Cerbos Hub, follow the [getting started guide](../../../cerbos-hub/getting-started.html). For more information about configuring a PDP to connect to Cerbos Hub, refer to the [Service PDP documentation](#cerbos-hub:ROOT:decision-points-service.adoc).
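A minimal sketch of a hub store configuration, using the fields from the reference configuration at the top of this page (the Cerbos Hub connection credentials are configured separately and omitted here):

```yaml
storage:
  driver: "hub"
  hub:
    remote:
      bundleLabel: latest
```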
## [](#mysql)MySQL driver

The MySQL storage backend is one of the dynamic stores that support adding or updating policies at runtime through the [Admin API](server.html#admin-api).

| | The [cerbosctl utility](../cli/cerbosctl.html) is a handy way to interact with the Admin API and supports loading policies through the [built-in put command](../cli/cerbosctl.html#put). |
| --- | --- |

| | Cerbos has an in-memory cache for holding compiled policy definitions to speed up the evaluation process. When a policy is removed or updated using the [Admin API](../api/admin%5Fapi.html#policy-management), this cache is updated by the instance that handles the request. However, if you share the database between multiple Cerbos instances, the other instances won’t be aware of the change and might still have the old policy definition cached in memory. There are two ways to handle this situation. By default, cache entries are stored indefinitely until there’s memory pressure; you can instead set a maximum cache duration for entries through the compile.cacheDuration configuration value, which helps all the Cerbos instances become eventually consistent within a time frame that’s acceptable to you (the lower the value, the quicker an eventually consistent state is reached). Alternatively, invoke the [/admin/store/reload API endpoint](../api/admin%5Fapi.html#store-management) on all the Cerbos instances whenever you make a change to your policies.
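The reload-everywhere option can be scripted. A sketch that builds the reload endpoint URL for every instance — the host names and HTTP port below are hypothetical, and real calls must also carry the Admin API credentials:

```python
def store_reload_urls(hosts: list[str], port: int = 3592) -> list[str]:
    """Build the /admin/store/reload URL for each Cerbos instance."""
    return [f"http://{host}:{port}/admin/store/reload" for host in hosts]

# Hypothetical instance list; call each URL with admin credentials after a policy change.
urls = store_reload_urls(["pdp-1.internal", "pdp-2.internal"])
```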
| --- | --- |

| | Unlike the SQLite3 driver, the tables and other database objects are not created automatically by the Cerbos MySQL driver. This is to minimize the privileges the Cerbos instance has on the MySQL installation. You must create the required tables using the provided script before configuring Cerbos to connect to the database. |
| --- | --- |

The driver configuration expects the connection details to be provided as a DSN in the following form:

```
[username[:password]@][protocol[(address)]]/dbname[?param1=value1&...&paramN=valueN]
```

See for the list of supported parameters.
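As a worked example of the DSN grammar above, a small helper that assembles one for the common `tcp` protocol (the connection details are placeholders, not a recommendation):

```python
def mysql_dsn(user: str, password: str, host: str, port: int, db: str, **params: str) -> str:
    """Assemble a DSN of the form user:password@tcp(host:port)/db?param=value."""
    dsn = f"{user}:{password}@tcp({host}:{port})/{db}"
    if params:
        dsn += "?" + "&".join(f"{k}={v}" for k, v in params.items())
    return dsn

# e.g. mysql_dsn("cerbos_user", "changeme", "localhost", 3306, "cerbos", interpolateParams="true")
```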
You can use environment variable references in the DSN to avoid storing credentials as part of the Cerbos configuration file.

Using MySQL as a storage backend for Cerbos

```yaml
storage:
  driver: "mysql"
  mysql:
    dsn: "${MYSQL_USER}:${MYSQL_PASSWORD}@tcp(localhost:3306)/cerbos"
```

### [](#%5Fsecure%5Fconnections)Secure connections

If your MySQL server requires TLS or if you want to use RSA key pair-based password exchange, you can configure those settings as follows:

TLS certificates

```yaml
storage:
  driver: "mysql"
  mysql:
    dsn: "${MYSQL_USER}:${MYSQL_PASSWORD}@tcp(localhost:3306)/cerbos?tls=mysecuretls"
    tls:
      mysecuretls:
        caCert: /path/to/ca_certificate.crt
        cert: /path/to/certificate.crt
        key: /path/to/private.key
```

Server public key

```yaml
storage:
  driver: "mysql"
  mysql:
    dsn: "${MYSQL_USER}:${MYSQL_PASSWORD}@tcp(localhost:3306)/cerbos?serverPubKey=mypubkey"
    serverPubKey:
      mypubkey: /path/to/server_public_key.pem
```

### [](#%5Fconnection%5Fpool)Connection pool

Cerbos uses a connection pool when connecting to a database. You can configure the connection pool settings by adding a `connPool` section to the driver configuration. The available options are:

* `maxLifeTime`: The maximum length of time a connection can be reused for. This is useful when your database enforces a maximum lifetime on connections or if you have a load balancer in front of your database to spread the load.
* `maxIdleTime`: How long a connection can remain idle before it is closed. Useful if you want to clean up idle connections quickly.
* `maxOpen`: Maximum number of connections that can be open at any given time (including idle connections).
* `maxIdle`: Maximum number of idle connections that can be open at any given time.

| | Connection pool settings can have a significant impact on the performance of Cerbos and your database server. Make sure you fully understand the implications of updating these settings before making any changes.
| --- | --- |

```yaml
storage:
  driver: "mysql"
  mysql:
    dsn: "${MYSQL_USER}:${MYSQL_PASSWORD}@tcp(localhost:3306)/cerbos"
    connPool:
      maxLifeTime: 5m
      maxIdleTime: 3m
      maxOpen: 10
      maxIdle: 5
```

### [](#%5Fconnection%5Fretries)Connection retries

Cerbos attempts to connect to the database on startup and exits if a connection cannot be established after three attempts. You can configure the connection retry settings using the `connRetry` options.

* `maxAttempts`: Maximum number of connection attempts before giving up.
* `initialInterval`: The time to wait before the second connection attempt. Subsequent attempts have increasing wait times (exponential backoff) derived from a combination of this value and the retry attempt number.
* `maxInterval`: Maximum amount of time to wait between retries. This caps the maximum value produced by the exponential backoff algorithm.

| | Changing the retry settings affects the availability of Cerbos and the time it takes to detect and recover from a failure. For example, if the database connection details are incorrect or have changed, it will take longer for a Cerbos PDP to fail on startup because of retries. |
| --- | --- |

### [](#mysql-schema)Database object definitions

You can customise the script below to suit your environment. Make sure to specify a strong password for the `cerbos_user` user.
```sql CREATE DATABASE IF NOT EXISTS cerbos CHARACTER SET utf8mb4; USE cerbos; CREATE TABLE IF NOT EXISTS policy ( id BIGINT PRIMARY KEY, kind VARCHAR(128) NOT NULL, name VARCHAR(1024) NOT NULL, version VARCHAR(128) NOT NULL, scope VARCHAR(512), description TEXT, disabled BOOLEAN default false, definition BLOB); CREATE TABLE IF NOT EXISTS policy_dependency ( policy_id BIGINT NOT NULL, dependency_id BIGINT NOT NULL, PRIMARY KEY (policy_id, dependency_id), FOREIGN KEY (policy_id) REFERENCES policy(id) ON DELETE CASCADE); CREATE TABLE IF NOT EXISTS policy_ancestor ( policy_id BIGINT NOT NULL, ancestor_id BIGINT NOT NULL, PRIMARY KEY (policy_id, ancestor_id), FOREIGN KEY (policy_id) REFERENCES policy(id) ON DELETE CASCADE); CREATE TABLE IF NOT EXISTS policy_revision ( revision_id INTEGER AUTO_INCREMENT PRIMARY KEY, action ENUM('INSERT', 'UPDATE', 'DELETE'), id BIGINT NOT NULL, kind VARCHAR(128), name VARCHAR(1024), version VARCHAR(128), scope VARCHAR(512), description TEXT, disabled BOOLEAN, definition BLOB, update_timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP); CREATE TABLE IF NOT EXISTS attr_schema_defs ( id VARCHAR(255) PRIMARY KEY, definition JSON); DROP TRIGGER IF EXISTS policy_on_insert; CREATE TRIGGER policy_on_insert AFTER INSERT ON policy FOR EACH ROW INSERT INTO policy_revision(action, id, kind, name, version, scope, description, disabled, definition) VALUES('INSERT', NEW.id, NEW.kind, NEW.name, NEW.version, NEW.scope, NEW.description, NEW.disabled, NEW.definition); DROP TRIGGER IF EXISTS policy_on_update; CREATE TRIGGER policy_on_update AFTER UPDATE ON policy FOR EACH ROW INSERT INTO policy_revision(action, id, kind, name, version, scope, description, disabled, definition) VALUES('UPDATE', NEW.id, NEW.kind, NEW.name, NEW.version, NEW.scope, NEW.description, NEW.disabled, NEW.definition); DROP TRIGGER IF EXISTS policy_on_delete; CREATE TRIGGER policy_on_delete AFTER DELETE ON policy FOR EACH ROW INSERT INTO policy_revision(action, id, kind, name, version, 
scope, description, disabled, definition) VALUES('DELETE', OLD.id, OLD.kind, OLD.name, OLD.version, OLD.scope, OLD.description, OLD.disabled, OLD.definition); CREATE USER IF NOT EXISTS cerbos_user IDENTIFIED BY 'changeme'; GRANT SELECT,INSERT,UPDATE,DELETE ON cerbos.policy TO cerbos_user; GRANT SELECT,INSERT,UPDATE,DELETE ON cerbos.attr_schema_defs TO cerbos_user; GRANT SELECT,INSERT,UPDATE,DELETE ON cerbos.policy_dependency TO cerbos_user; GRANT SELECT,INSERT,UPDATE,DELETE ON cerbos.policy_ancestor TO cerbos_user; GRANT SELECT,INSERT ON cerbos.policy_revision TO cerbos_user; ``` ## [](#overlay)Overlay driver You can provide redundancy by configuring an `overlay` driver, which wraps a `base` and a `fallback` driver. Under normal operation, the base driver is used. However, if the base driver consistently errors, the PDP starts targeting the fallback driver instead. Failover is determined by a configurable [circuit breaker pattern](https://learn.microsoft.com/en-us/previous-versions/msp-n-p/dn589784%28v=pandp.10%29). You can configure the fallback error threshold and the fallback error window to determine how many errors can occur within a rolling window before the circuit breaker is tripped. ```yaml storage: driver: "overlay" overlay: baseDriver: postgres fallbackDriver: disk fallbackErrorThreshold: 5 # number of errors that occur within the fallbackErrorWindow to trigger failover fallbackErrorWindow: 5s # the rolling window in which errors are aggregated disk: directory: policies watchForChanges: true postgres: url: "postgres://${PG_USER}:${PG_PASSWORD}@localhost:5432/postgres?sslmode=disable&search_path=cerbos" ``` | | The overlay driver assumes the same interface as the base driver. Any operations that are available on the base driver but not the fallback driver will error if the circuit breaker is open and the fallback driver is being targeted.
Likewise, even if the fallback driver supports additional operations compared to the base driver, these will still not be available should failover occur. | | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ## [](#postgres)Postgres driver The Postgres storage backend is one of the dynamic stores that supports adding or updating policies at runtime through the [Admin API](server.html#admin-api). | | The [cerbosctl utility](../cli/cerbosctl.html) is a handy way to interact with the Admin API and supports loading policies through the [built-in put command](../cli/cerbosctl.html#put). | | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | | Cerbos has an in-memory cache for holding compiled policy definitions to speed up the evaluation process. When a policy is removed or updated using the [Admin API](../api/admin%5Fapi.html#policy-management), this cache is updated by the instance that handles the request. However, if you share the database with multiple Cerbos instances, the other instances won't be aware of the change and might still have the old policy definition cached in memory. There are two ways to handle this situation. By default, the cache entries are stored indefinitely until there's memory pressure. You can set a maximum cache duration for entries by setting the `compile.cacheDuration` configuration value. This helps all the Cerbos instances become eventually consistent within a time frame that's acceptable to you.
Setting `compile.cacheDuration` to a low value helps reach an eventually consistent state more quickly. Alternatively, invoke the [/admin/store/reload API endpoint](../api/admin%5Fapi.html#store-management) on all the Cerbos instances whenever you make a change to your policies. | | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | | Unlike the SQLite3 driver, the tables and other database objects are not created automatically by the Cerbos Postgres driver. This is to minimize the privileges the Cerbos instance has on the Postgres installation. You must create the required tables using the provided script before configuring Cerbos to connect to the database.
| | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | The driver configuration expects the connection details to be provided as a connection URL. See the [Postgres connstring documentation](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING) for more information. Use the `search_path` parameter to point to the schema containing the Cerbos tables. You can use environment variable references in the URL to avoid storing credentials as part of the Cerbos configuration file. Using Postgres as a storage backend for Cerbos ```yaml storage: driver: "postgres" postgres: url: "postgres://${PG_USER}:${PG_PASSWORD}@localhost:5432/postgres?sslmode=disable&search_path=cerbos" ``` ### [](#%5Fconnection%5Fpool%5F2)Connection pool Cerbos uses a connection pool when connecting to a database. You can configure the connection pool settings by adding a `connPool` section to the driver configuration. Available options are: `maxLifeTime` The maximum length of time a connection can be reused for. This is useful when your database enforces a maximum lifetime on connections or if you have a load balancer in front of your database to spread the load. `maxIdleTime` How long a connection should be idle for before it is closed. Useful if you want to clean up idle connections quickly. `maxOpen` Maximum number of connections that can be open at any given time (including idle connections). `maxIdle` Maximum number of idle connections that can be open at any given time. | | Connection pool settings can have a significant impact on the performance of Cerbos and your database server. Make sure you fully understand the implications of updating these settings before making any changes.
| | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ```yaml storage: driver: "postgres" postgres: url: "postgres://${PG_USER}:${PG_PASSWORD}@localhost:5432/postgres?sslmode=disable&search_path=cerbos" connPool: maxLifeTime: 5m maxIdleTime: 3m maxOpen: 10 maxIdle: 5 ``` ### [](#%5Fconnection%5Fretries%5F2)Connection retries Cerbos attempts to connect to the database on startup and exits if a connection cannot be established after three attempts. You can configure the connection retry settings using the `connRetry` options. `maxAttempts` Maximum number of connection attempts before giving up `initialInterval` The time to wait before the second connection attempt. Subsequent attempts have increasing wait times (exponential backoff) derived from a combination of this value and the retry attempt number `maxInterval` Maximum amount of time to wait between retries. This caps the maximum value produced by the exponential backoff algorithm. | | Changing the retry settings affects the availability of Cerbos and the time it takes to detect and recover from a failure. For example, if the database connection details are incorrect or have changed, it will take longer for a Cerbos PDP to fail on startup because of retries. | | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ### [](#postgres-schema)Database object definitions You can customise the script below to suit your environment. Make sure to specify a strong password for the `cerbos_user` user.
```sql CREATE SCHEMA IF NOT EXISTS cerbos; SET search_path TO cerbos; CREATE TABLE IF NOT EXISTS policy ( id bigint NOT NULL PRIMARY KEY, kind VARCHAR(128) NOT NULL, name VARCHAR(1024) NOT NULL, version VARCHAR(128) NOT NULL, scope VARCHAR(512), description TEXT, disabled BOOLEAN default false, definition BYTEA ); CREATE TABLE IF NOT EXISTS policy_dependency ( policy_id BIGINT, dependency_id BIGINT, PRIMARY KEY (policy_id, dependency_id), FOREIGN KEY (policy_id) REFERENCES cerbos.policy(id) ON DELETE CASCADE ); CREATE TABLE IF NOT EXISTS policy_ancestor ( policy_id BIGINT, ancestor_id BIGINT, PRIMARY KEY (policy_id, ancestor_id), FOREIGN KEY (policy_id) REFERENCES cerbos.policy(id) ON DELETE CASCADE ); CREATE TABLE IF NOT EXISTS policy_revision ( revision_id SERIAL PRIMARY KEY, action VARCHAR(64), id BIGINT, kind VARCHAR(128), name VARCHAR(1024), version VARCHAR(128), scope VARCHAR(512), description TEXT, disabled BOOLEAN, definition BYTEA, update_timestamp TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP ); CREATE TABLE IF NOT EXISTS attr_schema_defs ( id VARCHAR(255) PRIMARY KEY, definition JSON ); CREATE OR REPLACE FUNCTION process_policy_audit() RETURNS TRIGGER AS $policy_audit$ BEGIN IF (TG_OP = 'DELETE') THEN INSERT INTO policy_revision(action, id, kind, name, version, scope, description, disabled, definition) VALUES('DELETE', OLD.id, OLD.kind, OLD.name, OLD.version, OLD.scope, OLD.description, OLD.disabled, OLD.definition); ELSIF (TG_OP = 'UPDATE') THEN INSERT INTO policy_revision(action, id, kind, name, version, scope, description, disabled, definition) VALUES('UPDATE', NEW.id, NEW.kind, NEW.name, NEW.version, NEW.scope, NEW.description, NEW.disabled, NEW.definition); ELSIF (TG_OP = 'INSERT') THEN INSERT INTO policy_revision(action, id, kind, name, version, scope, description, disabled, definition) VALUES('INSERT', NEW.id, NEW.kind, NEW.name, NEW.version, NEW.scope, NEW.description, NEW.disabled, NEW.definition); END IF; RETURN NULL; END; $policy_audit$ LANGUAGE 
plpgsql; CREATE TRIGGER policy_audit AFTER INSERT OR UPDATE OR DELETE ON policy FOR EACH ROW EXECUTE PROCEDURE process_policy_audit(); CREATE USER cerbos_user WITH PASSWORD 'changeme'; GRANT CONNECT ON DATABASE postgres TO cerbos_user; GRANT USAGE ON SCHEMA cerbos TO cerbos_user; GRANT SELECT,INSERT,UPDATE,DELETE ON cerbos.policy, cerbos.policy_dependency, cerbos.policy_ancestor, cerbos.attr_schema_defs TO cerbos_user; GRANT SELECT,INSERT ON cerbos.policy_revision TO cerbos_user; GRANT USAGE,SELECT ON cerbos.policy_revision_revision_id_seq TO cerbos_user; ``` ## [](#sqlite3)SQLite3 driver The SQLite3 storage backend is one of the dynamic stores that supports adding or updating policies at runtime through the [Admin API](server.html#admin-api). | | The [cerbosctl utility](../cli/cerbosctl.html) is a handy way to interact with the Admin API and supports loading policies through the [built-in put command](../cli/cerbosctl.html#put). | | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | In-memory ephemeral database ```yaml storage: driver: "sqlite3" sqlite3: dsn: "file::memory:?cache=shared" ``` | | Cerbos uses a database connection pool, which results in unexpected behaviour when using the SQLite `:memory:` database. Use `file::memory:?cache=shared` instead. See for details. | | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | On-disk persistent database ```yaml storage: driver: "sqlite3" sqlite3: dsn: "file:/tmp/cerbos.sqlite?mode=rwc&cache=shared&_fk=true" ``` Telemetry ==================== Cerbos developers rely on anonymous usage data to help prioritise new features and improve the product.
The information collected is completely anonymous, never shared with external entities, and you can opt out at any time. ## [](#%5Fwhat%5Fkind%5Fof%5Fdata%5Fis%5Fcollected)What kind of data is collected? * Cerbos build information like version, commit and build date * Operating system type and architecture * Enabled Cerbos features (storage backend type and schema enforcement level are some examples of this information) * Aggregated statistics about the policies like the total number of policies and average number of rules in a policy * Aggregated statistics about Cerbos API calls and the gRPC user agents. You can view the full schema of the telemetry data on [GitHub](https://github.com/cerbos/cerbos/tree/main/api/public/cerbos/telemetry/v1/telemetry.proto) or on the [Buf schema registry](https://buf.build/cerbos/cerbos-api/docs/main/cerbos.telemetry.v1). We use [Rudderstack](https://www.rudderstack.com) to collect the data. Only a small number of Zenauth (the company leading the development of Cerbos) employees have access to the data. ## [](#%5Fhow%5Fto%5Fdisable%5Ftelemetry%5Fcollection)How to disable telemetry collection There are multiple ways in which you can disable telemetry collection. ### [](#%5Fuse%5Fthe%5Fconfiguration%5Ffile)Use the configuration file Set `telemetry.disabled: true` in the [Cerbos configuration file](index.html). ```yaml telemetry: disabled: true ``` ### [](#%5Fset%5Fan%5Fenvironment%5Fvariable)Set an environment variable Set `CERBOS_NO_TELEMETRY=1` or `CERBOS_NO_TELEMETRY=true` in your environment. We also honour the `DO_NOT_TRACK` environment variable if it is set. With the binary ```sh CERBOS_NO_TELEMETRY=1 ./cerbos server --config=/path/to/.cerbos.yaml ``` With the container ```sh docker run -i -t -p 3592:3592 \ -e CERBOS_NO_TELEMETRY=true \ ghcr.io/cerbos/cerbos:0.45.1 ``` ### [](#%5Fthrough%5Fthe%5Fcommand%5Fline)Through the command line Start Cerbos with the `--set=telemetry.disabled=true` command line flag.
With the binary ```sh ./cerbos server --config=/path/to/.cerbos.yaml --set=telemetry.disabled=true ``` With the container ```sh docker run -i -t -p 3592:3592 \ ghcr.io/cerbos/cerbos:0.45.1 \ server --set=telemetry.disabled=true ``` Tracing block ==================== | | The tracing block was deprecated in Cerbos 0.32.0 and removed in Cerbos 0.33.0. Refer to [observability configuration](observability.html#traces) for information about configuring traces. | | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ## [](#migration)Migrating tracing configuration from previous Cerbos versions From Cerbos 0.32.0, the preferred method of trace configuration is through the OpenTelemetry environment variables described in [observability configuration](observability.html#traces). The `tracing` section is no longer supported by Cerbos versions starting from 0.33.0. The native Jaeger protocol has also been superseded by OTLP and is no longer supported. Follow the instructions below to migrate your existing configuration.
| Configuration setting | New configuration | | ---------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | tracing.serviceName | Set the OTEL_SERVICE_NAME environment variable | | tracing.sampleProbability | Set OTEL_TRACES_SAMPLER to parentbased_traceidratio and OTEL_TRACES_SAMPLER_ARG to the probability value | | tracing.jaeger.agentEndpoint or tracing.jaeger.collectorEndpoint | Jaeger now has [stable support for OTLP](https://www.jaegertracing.io/docs/1.51/apis/#opentelemetry-protocol-stable), which is the recommended way to send traces. Set OTEL_EXPORTER_OTLP_TRACES_ENDPOINT to the address of your Jaeger instance (for example: ) and, optionally, set OTEL_EXPORTER_OTLP_TRACES_INSECURE=true if Jaeger is using a self-signed certificate. If you want to use the HTTP API or customize other aspects, refer to the documentation above for other supported environment variables. | | tracing.otlp.collectorEndpoint | Set OTEL_EXPORTER_OTLP_TRACES_ENDPOINT to the value of the collector endpoint and OTEL_EXPORTER_OTLP_INSECURE=true to emulate the behaviour of the Cerbos OTLP exporter before version 0.32.0. 
| Deploy Cerbos to Cloud platforms ==================== ## [](#%5Faws%5Fmarketplace)AWS Marketplace Cerbos is available via the [AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-6kkahbtwv3gtq) and can be deployed in either [Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/) or [Elastic Container Service (ECS)](https://aws.amazon.com/ecs/). When deploying Cerbos via the Marketplace, your Cerbos Hub account is included with the purchase via AWS and no additional paid account is required. ### [](#%5Felastic%5Fkubernetes%5Fservice%5Feks)Elastic Kubernetes Service (EKS) #### [](#%5Fstep%5F1%5Fcreate%5Fan%5Fiam%5Fpolicy)Step 1: Create an IAM policy To deploy Cerbos from AWS Marketplace, you need to assign an IAM policy with the appropriate IAM permissions to a Kubernetes service account before starting the deployment. You can either use the AWS managed policy `arn:aws:iam::aws:policy/AWSMarketplaceMeteringRegisterUsage` or create your own IAM policy. Here’s an example IAM policy: ```json { "Version": "2012-10-17", "Statement": [ { "Action": [ "aws-marketplace:RegisterUsage" ], "Effect": "Allow", "Resource": "*" } ] } ``` #### [](#%5Fstep%5F2%5Fcreate%5Fan%5Fiam%5Frole%5Ffor%5Fthe%5Fkubernetes%5Fservice%5Faccount%5Firsa)Step 2: Create an IAM role for the Kubernetes service account (IRSA) Once the IAM policy has been created, a Kubernetes service account needs to be created and associated with an IAM role. We recommend doing this via [eksctl](https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html). The command below automates the process to: 1. Create an IAM role with the AWS-managed IAM policy (or you can provide your own ARN). 2. Create a Kubernetes service account named `cerbos-serviceaccount` in the cluster. 3. Set up a trust relationship between the IAM role and the service account. 4. 
Modify the `cerbos-serviceaccount` annotation to associate it with the created IAM role. Remember to replace `CLUSTER_NAME` with your actual Amazon EKS cluster name and, optionally, set the namespace. ```sh eksctl create iamserviceaccount \ --name cerbos-serviceaccount \ --attach-policy-arn arn:aws:iam::aws:policy/AWSMarketplaceMeteringRegisterUsage \ --namespace default \ --cluster CLUSTER_NAME \ --approve \ --override-existing-serviceaccounts ``` #### [](#%5Fstep%5F4%5Fdeploy%5Fcerbos%5Fwith%5Fthe%5Fservice%5Faccount)Step 4: Deploy Cerbos with the service account | | Requires a [Cerbos Hub](https://www.cerbos.dev/product-cerbos-hub) account. [![Try Cerbos Hub](../_images/try_cerbos_hub.png)](https://hub.cerbos.cloud) | | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | For the following steps, you need a Cerbos Hub account with a workspace connected to your policy repository and a set of client credentials. See the [Cerbos Hub getting started guide](../../../cerbos-hub/getting-started.html) for details. * Create a new Kubernetes secret to hold the Cerbos Hub credentials - see the [Cerbos Hub guide](../../../cerbos-hub/getting-started.html) for details. 
```sh kubectl create secret generic cerbos-hub-credentials \ --from-literal=CERBOS_HUB_CLIENT_ID=YOUR_CLIENT_ID \ (1) --from-literal=CERBOS_HUB_CLIENT_SECRET=YOUR_CLIENT_SECRET \ (2) --from-literal=CERBOS_HUB_WORKSPACE_SECRET=YOUR_WORKSPACE_SECRET (3) ``` | **1** | Client ID from the Cerbos Hub credential | | ----- | -------------------------------------------- | | **2** | Client secret from the Cerbos Hub credential | | **3** | Cerbos Hub workspace secret | Create a new values file named `hub-values.yaml` with the following contents:

```yaml
# Assign the service account
serviceAccount:
  name: cerbos-serviceaccount

# Set Cerbos configuration
cerbos:
  config:
    # Configure the Hub storage driver
    storage:
      driver: "hub"

    # Configure the deployment label. Alternatively, add `CERBOS_HUB_BUNDLE=` to the secret you created above.
    hub:
      remote:
        bundleLabel: "YOUR_LABEL" <1>

    # Configure the Hub audit backend
    audit:
      enabled: true <2>
      backend: "hub"
      hub:
        storagePath: /audit_logs

# Create environment variables from the secret.
envFrom:
  - secretRef:
      name: cerbos-hub-credentials

# Mount a volume for locally buffering the audit logs. A persistent volume is recommended for production use cases.
volumes:
  - name: cerbos-audit-logs
    emptyDir: {}

volumeMounts:
  - name: cerbos-audit-logs
    mountPath: /audit_logs
```

| **1** | The label to watch for bundle updates. See [deployment labels documentation](#cerbos-hub:ROOT:deployment-labels.adoc) for details. | | ----- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **2** | Enables audit log collection. See [Hub audit log collection documentation](../../../cerbos-hub/audit-log-collection.html) for information about masking sensitive fields and other advanced settings. 
| * Deploy Cerbos using the AWS Helm chart ```sh aws ecr get-login-password \ --region us-east-1 | helm registry login \ --username AWS \ --password-stdin 709825985650.dkr.ecr.us-east-1.amazonaws.com helm install cerbos oci://709825985650.dkr.ecr.us-east-1.amazonaws.com/cerbos/cerbos-aws-helm --values=hub-values.yaml ``` ### [](#%5Felastic%5Fcontainer%5Fservice%5Fecs)Elastic Container Service (ECS) #### [](#%5Fstep%5F1%5Fcreate%5Fecs%5Ftask%5Frole%5Fpolicy)Step 1: Create ECS Task Role policy To deploy Cerbos from AWS Marketplace, you need to create an ECS Task IAM Role with the appropriate IAM permissions before starting the deployment. You can either use the AWS managed policy `arn:aws:iam::aws:policy/AWSMarketplaceMeteringRegisterUsage` or create your own IAM policy. Here’s an example IAM policy. You will need the ARN of this role when defining the task. ```json { "Version": "2012-10-17", "Statement": [ { "Action": [ "aws-marketplace:RegisterUsage" ], "Effect": "Allow", "Resource": "*" } ] } ``` #### [](#%5Fstep%5F2%5Fcreate%5Fthe%5Ftask%5Fdefinition)Step 2: Create the task definition In the AWS console or the CLI, create the task using the following JSON definition, substituting the values noted: ```json { "family": "cerbos", "containerDefinitions": [ { "name": "cerbos", "image": "709825985650.dkr.ecr.us-east-1.amazonaws.com/cerbos/cerbos:0.45.1", "cpu": 0, "portMappings": [ { "name": "cerbos-3592-tcp", "containerPort": 3592, "hostPort": 3592, "protocol": "tcp", "appProtocol": "http" }, { "name": "cerbos-3593-tcp", "containerPort": 3593, "hostPort": 3593, "protocol": "tcp" } ], "essential": true, "environment": [ { "name": "CERBOS_HUB_CLIENT_ID", "value": "YOUR_CLIENT_ID" <1> }, { "name": "CERBOS_HUB_CLIENT_SECRET", "value": "YOUR_CLIENT_SECRET" <2> }, { "name": "CERBOS_HUB_WORKSPACE_SECRET", "value": "YOUR_WORKSPACE_SECRET" <3> }, { "name": "CERBOS_HUB_BUNDLE", "value": "YOUR_LABEL" <4> } ], "command": [ "server", "--set=audit.enabled=true", <5> 
"--set=audit.backend=hub", "--set=audit.hub.storagePath=/tmp" ], "environmentFiles": [], "mountPoints": [], "volumesFrom": [], "ulimits": [], "healthCheck": { "command": [ "CMD", "/cerbos", "healthcheck" ], "interval": 30, "timeout": 5, "retries": 3, "startPeriod": 5 }, "systemControls": [] } ], "taskRoleArn": "TASK_ROLE_ARN", <6> "executionRoleArn": "TASK_EXECUTION_ROLE_ARN", <7> "networkMode": "awsvpc", "requiresCompatibilities": [ "FARGATE" ], "cpu": "1024", "memory": "3072", "runtimePlatform": { "cpuArchitecture": "X86_64", "operatingSystemFamily": "LINUX" } } ``` | **1** | Client ID from the Cerbos Hub credential | | ----- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **2** | Client secret from the Cerbos Hub credential | | **3** | Cerbos Hub workspace secret | | **4** | The label to watch for bundle updates. See [deployment labels documentation](#cerbos-hub:ROOT:deployment-labels.adoc) for details. | | **5** | Enables audit log collection. See [Hub audit log collection documentation](../../../cerbos-hub/audit-log-collection.html) for information about masking sensitive fields and other advanced settings. | | **6** | The ARN for the custom ECS Task Role defined in Step 1. | | **7** | The ARN for the ECS Task Execution. The default is arn:aws:iam:::role/ecsTaskExecutionRole | #### [](#%5Fstep%5F4%5Flaunch%5Fa%5Fservice)Step 4: Launch a service Using the above task defintion, launch a service in your ECS Cluster. Take note to ensure the service is running attached to the security groups which your applications will be calling Cerbos from. ## [](#%5Ffly%5Fio)Fly.io You can deploy Cerbos on Fly.io as a [Fly Launch](https://fly.io/docs/apps) app. 
The following `fly.toml` file shows how to deploy Cerbos with healthchecks and metrics: ```toml app = '' (1) primary_region = '' (2) [build] image = 'ghcr.io/cerbos/cerbos:0.45.1' [[mounts]] source = 'policies' destination = '/policies' initial_size = '1GB' [[services]] protocol = '' internal_port = 3592 [[services.ports]] port = 3592 handlers = ['tls', 'http'] [[services.http_checks]] interval = '5s' timeout = '2s' grace_period = '5s' method = 'get' path = '/_cerbos/health' protocol = 'http' [[services]] protocol = '' internal_port = 3593 [[services.ports]] port = 3593 handlers = ['tls'] [services.ports.tls_options] alpn = ['h2'] [[vm]] memory = '1gb' cpu_kind = 'shared' cpus = 1 [metrics] port = 3592 path = "/_cerbos/metrics" ``` | **1** | The name of the [Fly App](https://fly.io/docs/apps) | | ----- | ----------------------------------------------------------------------------- | | **2** | Pick a Fly.io [region](https://fly.io/docs/reference/regions/#fly-io-regions) | The example above launches a Cerbos instance with the [minimal configuration](../configuration/index.html#minimal-configuration) using an empty [Fly volume](https://fly.io/docs/reference/volumes/) mounted as the policy directory. For production use cases, consider using one of the following methods for policy storage. | | Your service should listen on the right address within the VM: Fly Proxy reaches services through a private IPv4 address on each VM, so the process should listen on 0.0.0.0: (but see [A note on IPv4 and IPv6 wildcards](https://fly.io/docs/networking/app-services/#a-note-on-ipv4-and-ipv6-wildcards)). 
| | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | * Cerbos [git driver](../configuration/storage.html#git-driver) with a Git provider such as GitHub or GitLab * Cerbos [blob driver](../configuration/storage.html#blob-driver) with [Tigris](https://fly.io/docs/reference/tigris/#create-and-manage-a-tigris-storage-bucket) * Cerbos [sqlite3 driver](../configuration/storage.html#sqlite3) with a standalone SQLite database or [LiteFS](https://fly.io/docs/litefs/#litefs-cloud) * Cerbos [postgres driver](../configuration/storage.html#postgres) with [Fly Postgres](https://fly.io/docs/postgres/) * [Cerbos Hub](https://www.cerbos.dev/product-cerbos-hub) | | Cerbos can be [configured entirely from the command line](../configuration/index.html) using `--set` flags. On the Fly.io platform, they can be set by overriding the `cmd` setting in the [experimental section](https://fly.io/docs/reference/configuration/#the-experimental-section) of the `fly.toml` file. | | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ### [](#%5Fusing%5Ftigris%5Fas%5Fa%5Fpolicy%5Frepository)Using Tigris as a policy repository The Cerbos `blob` driver can be used with any S3-compatible blob storage backend such as [Tigris](https://fly.io/docs/reference/tigris). Create a storage bucket on Tigris. Refer to for more information about creating storage buckets. 
```bash flyctl storage create ``` Note down the credentials for accessing the bucket and save them as application secrets. ```bash flyctl apps create (1) flyctl secrets set --app= AWS_ACCESS_KEY_ID=tid_XXXXXX (2) flyctl secrets set --app= AWS_SECRET_ACCESS_KEY=tsec_XXXXXX (3) ``` | **1** | Your application name on Fly.io | | ----- | ------------------------------- | | **2** | Tigris key ID | | **3** | Tigris secret access key | Create a `fly.toml` file. ```toml app = '' (1) primary_region = '' (2) [build] image = 'ghcr.io/cerbos/cerbos:0.45.1' [experimental] cmd = [ 'server', '--set', 'storage.driver=blob', '--set', 'storage.blob.bucket=s3://?endpoint=fly.storage.tigris.dev&region=auto', (3) '--set', 'storage.blob.downloadTimeout=30s', '--set', 'storage.blob.prefix=policies', '--set', 'storage.blob.updatePollInterval=15s', '--set', 'storage.blob.workDir=/policies' ] [[mounts]] source = 'policies' destination = '/policies' initial_size = '1GB' [[services]] protocol = '' internal_port = 3592 auto_stop_machines = true [[services.ports]] port = 3592 handlers = ['tls', 'http'] [[services.http_checks]] interval = '5s' timeout = '2s' grace_period = '5s' method = 'get' path = '/_cerbos/health' protocol = 'http' [[services]] protocol = '' internal_port = 3593 auto_stop_machines = true [[services.ports]] port = 3593 handlers = ['tls'] [services.ports.tls_options] alpn = ['h2'] [[vm]] memory = '1gb' cpu_kind = 'shared' cpus = 1 [metrics] port = 3592 path = "/_cerbos/metrics" ``` | **1** | The name of the [Fly App](https://fly.io/docs/apps) | | ----- | ----------------------------------------------------------------------------- | | **2** | Pick a Fly.io [region](https://fly.io/docs/reference/regions/#fly-io-regions) | | **3** | Storage bucket name | Deploy the app. 
```bash
flyctl deploy
```

### [](#%5Fusing%5Flitefs%5Fas%5Fa%5Fpolicy%5Frepository)Using LiteFS as a policy repository

Fly.io’s distributed SQLite storage layer [LiteFS](https://fly.io/docs/litefs) can be used for policy storage using Cerbos' `sqlite3` driver. Start by creating an app on Fly.io.

```bash
flyctl apps create
```

Create a LiteFS configuration file named `litefs.yml`.

```yaml
data:
  dir: "/var/lib/litefs"

exec:
  - cmd: "/cerbos server --set=storage.driver=sqlite3 --set=storage.sqlite3.dsn=file:/litefs/db --set=server.adminAPI.enabled=true --set=server.adminAPI.adminCredentials.username=$CERBOS_ADMIN_USER --set=server.adminAPI.adminCredentials.passwordHash=$CERBOS_ADMIN_PASSWORD_HASH"
    exit-on-error: false

fuse:
  dir: "/litefs"

lease:
  advertise-url: "http://${FLY_ALLOC_ID}.vm.${FLY_APP_NAME}.internal:20202"
  candidate: ${FLY_REGION == PRIMARY_REGION}
  consul:
    url: "${FLY_CONSUL_URL}"
    key: "${FLY_APP_NAME}/primary"
  promote: true
  type: "consul"
```

| | Refer to the [Configuring LiteFS](https://fly.io/docs/litefs/getting-started-docker/#configuring-litefs) documentation for other available configuration parameters. |
| --------------------------------------------------------------------------------------------------------------------------------------------------------- |

Create a Dockerfile.

```Dockerfile
FROM flyio/litefs:0.5 AS litefs

FROM ghcr.io/cerbos/cerbos:0.45.1 AS cerbos

FROM alpine:3.16 AS base
RUN apk add fuse3 sqlite
ADD litefs.yml /etc/litefs.yml
COPY --from=cerbos /cerbos /cerbos
COPY --from=litefs /usr/local/bin/litefs /usr/local/bin/litefs

ENTRYPOINT ["litefs"]
CMD ["mount"]
```

Create a `fly.toml` file to launch Cerbos.
```toml
app = '' (1)
primary_region = '' (2)

[build]
dockerfile = "Dockerfile"

[mounts]
source = "litefs"
destination = "/var/lib/litefs" (3)

[[services]]
protocol = ''
internal_port = 3592

[[services.ports]]
port = 3592
handlers = ['tls', 'http']

[[services.http_checks]]
interval = '5s'
timeout = '2s'
grace_period = '5s'
method = 'get'
path = '/_cerbos/health'
protocol = 'http'

[[services]]
protocol = ''
internal_port = 3593

[[services.ports]]
port = 3593
handlers = ['tls']

[services.ports.tls_options]
alpn = ['h2']

[[vm]]
memory = '1gb'
cpu_kind = 'shared'
cpus = 1

[metrics]
port = 3592
path = "/_cerbos/metrics"
```

| **1** | The name of the [Fly App](https://fly.io/docs/apps) |
| ----- | --------------------------------------------------- |
| **2** | Pick a [region](https://fly.io/docs/reference/regions/#fly-io-regions) |
| **3** | Destination must be equal to the one specified in litefs.yml |

Create secrets to hold the Cerbos Admin API credentials. Refer to the [password hash generation instructions](../configuration/server.html#password-hash) to learn how to generate the password hash.

```bash
flyctl secrets set CERBOS_ADMIN_USER=
flyctl secrets set CERBOS_ADMIN_PASSWORD_HASH=
```

Attach to Consul to manage LiteFS leases.

```bash
flyctl consul attach
```

| | See [lease configuration](https://fly.io/docs/litefs/getting-started-fly/#lease-configuration) for more information about Consul leases on Fly.io. |
| --------------------------------------------------------------------------------------------------------------------------------------------------------- |

Finally, deploy Cerbos.

```bash
flyctl deploy
```

You can interact with the Cerbos [Admin API](../api/admin%5Fapi.html) using one of the Cerbos SDKs or the [cerbosctl](../cli/cerbosctl.html) utility to manage the policies stored on LiteFS.
List policies with cerbosctl:

```bash
cerbosctl \
  --server=.fly.dev:3593 \
  --username= \
  --password= \
  get rp
```

Put a policy, or a directory consisting of multiple policies, with cerbosctl:

```bash
cerbosctl \
  --server=.fly.dev:3593 \
  --username= \
  --password= \
  put policies -R \
  policy_dir
```

Cerbos deployment patterns
====================

Cerbos can be deployed as a service or as a sidecar. Which mode to choose depends on your requirements.

## [](#service)Service model

![Service model](_images/service_deployment.png)

* Central policy decision point shared by a group of applications.
* Cerbos can be upgraded independently from the applications — reducing maintenance overhead.
* In a busy environment, careful capacity planning is required to ensure that the central Cerbos endpoint does not become a bottleneck.

## [](#sidecar)Sidecar model

![Sidecar model](_images/sidecar_deployment.png)

* Each application instance gets its own Cerbos instance — ensuring high performance and availability.
* Upgrades to Cerbos require a rolling update of all the application instances.
* Policy updates could take slightly longer to propagate to all the individual application instances — resulting in a period where both the old and new policies are in effect at the same time.

## [](#daemonset)DaemonSet model

![DaemonSet model](_images/daemonset_deployment.png)

* Each cluster node gets its own Cerbos instance — ensuring high performance and efficient resource usage.
* Policy updates must roll out to nodes individually — resulting in a period where both the old and new policies are in effect at the same time.
* When deployed as a DaemonSet, the service `internalTrafficPolicy` defaults to `Local`. This forces all requests to the service to the local node for minimum latency. Upgrades to Cerbos could result in applications seeing a short outage of the Cerbos instance on their own node; client retries may be necessary.
If this is unacceptable, you can set `service.internalTrafficPolicy` to `Cluster`. You may be able to improve availability via the `service.kubernetes.io/topology-mode: Auto` annotation.

Deploy Cerbos as a DaemonSet
====================

You can use the [Cerbos Helm chart](../installation/helm.html) to deploy Cerbos as a DaemonSet inside your Kubernetes cluster by setting the Helm `type` value to `daemonset`. By default, the [internal traffic policy](https://kubernetes.io/docs/concepts/services-networking/service-traffic-policy/) is set to `Local`. You can change this by setting `service.internalTrafficPolicy` explicitly. Refer to the [Helm chart instructions](../installation/helm.html) to learn more about using the Cerbos Helm chart.

Deploy Cerbos as a service
====================

You can use the [Cerbos Helm chart](../installation/helm.html) to deploy Cerbos as a service inside your Kubernetes cluster. Refer to the [Helm chart instructions](../installation/helm.html) for more details.

Deploy Cerbos as a sidecar
====================

The sidecar deployment model might be a preferable option under the following circumstances:

* You have a self-contained application that does not need to share policies with other applications in your environment.
* You prefer to ship policy changes as application updates by bundling the two together.
* You are concerned about network latency.

Cerbos supports serving the API over a Unix domain socket. This allows your application container to communicate securely with the Cerbos service with no network overhead. Because the Cerbos server is only listening over a Unix domain socket, no other applications in your network will be able to communicate with it — thus providing secrecy as a bonus side effect.

The following example illustrates a Kubernetes deployment with Cerbos as a sidecar.

| | We are using [ghostunnel](https://github.com/ghostunnel/ghostunnel) as the application container for demonstration purposes only.
In a real production deployment the Cerbos endpoint should not be exposed to the network. |
| --------------------------------------------------------------------------------------------------------------------------------------------------------- |

```yaml
---
# Config map used to configure Cerbos.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cerbos-sidecar-demo
  labels:
    app.kubernetes.io/name: cerbos-sidecar-demo
    app.kubernetes.io/component: cerbos
    app.kubernetes.io/version: "0.0.1"
data:
  ".cerbos.yaml": |-
    server:
      # Configure Cerbos to listen on a Unix domain socket.
      httpListenAddr: "unix:/sock/cerbos.sock"
    storage:
      driver: disk
      disk:
        directory: /policies
        watchForChanges: false
---
# Application deployment with Cerbos as a sidecar.
# Note that in this example we are simply proxying requests received
# by the main application (application container) to the Cerbos
# sidecar (`cerbos` container) for demonstration purposes. In a real
# production deployment the main application would not expose Cerbos
# to the outside world at all. It would communicate with the Cerbos
# sidecar privately to make policy decisions about the actions that
# it is performing.
#
# Bonus: You can re-purpose this example to deploy Cerbos in an
# environment that requires SPIFFE workload identities and/or
# regular certificate rotation and access restrictions. See the
# ghostunnel documentation at https://github.com/ghostunnel/ghostunnel
# for more information.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cerbos-sidecar-demo
  labels:
    app.kubernetes.io/name: cerbos-sidecar-demo
    app.kubernetes.io/component: cerbos-sidecar-demo
    app.kubernetes.io/version: "0.0.1"
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: cerbos-sidecar-demo
      app.kubernetes.io/component: cerbos-sidecar-demo
  template:
    metadata:
      labels:
        app.kubernetes.io/name: cerbos-sidecar-demo
        app.kubernetes.io/component: cerbos-sidecar-demo
    spec:
      containers:
        ########################################################################
        # Application container. Replace with your own application definition. #
        ########################################################################
        - name: application
          image: "ghostunnel/ghostunnel"
          imagePullPolicy: IfNotPresent
          args:
            - "server"
            - "--listen=:3592"
            - "--target=unix:/sock/cerbos.sock"
            - "--cert=/certs/tls.crt"
            - "--key=/certs/tls.key"
            - "--disable-authentication"
          ports:
            - name: http
              containerPort: 3592
          livenessProbe:
            httpGet:
              path: /_cerbos/health
              port: http
              scheme: HTTPS
          readinessProbe:
            httpGet:
              path: /_cerbos/health
              port: http
              scheme: HTTPS
          volumeMounts:
            # Mount the shared volume containing the socket
            - name: sock
              mountPath: /sock
            - name: certs
              mountPath: /certs
        ##################
        # Cerbos sidecar #
        ##################
        - name: cerbos
          image: "ghcr.io/cerbos/cerbos:0.45.1"
          imagePullPolicy: IfNotPresent
          args:
            - "server"
            - "--config=/config/.cerbos.yaml"
            - "--log-level=INFO"
          volumeMounts:
            # Mount the shared volume containing the socket
            - name: sock
              mountPath: /sock
            - name: config
              mountPath: /config
              readOnly: true
            - name: policies
              mountPath: /policies
      volumes:
        # Shared volume containing the socket.
        - name: sock
          emptyDir: {}
        - name: config
          configMap:
            name: cerbos-sidecar-demo
        - name: certs
          secret:
            secretName: cerbos-sidecar-demo
        - name: policies
          emptyDir: {}
---
# Use cert-manager to issue a certificate to the application.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cerbos-sidecar-demo
  labels:
    app.kubernetes.io/name: cerbos-sidecar-demo
    app.kubernetes.io/component: cerbos-sidecar-demo
    app.kubernetes.io/version: "0.0.1"
spec:
  isCA: true
  secretName: cerbos-sidecar-demo
  dnsNames:
    - cerbos-sidecar-demo.default.svc.cluster.local
    - cerbos-sidecar-demo.default.svc
    - cerbos-sidecar-demo.default
    - cerbos-sidecar-demo
  issuerRef:
    name: selfsigned-cluster-issuer
    kind: ClusterIssuer
    group: cert-manager.io
```

Deploy Cerbos to Serverless/FaaS environments
====================

## [](#%5Faws%5Flambda)AWS Lambda

You can deploy Cerbos to AWS Lambda by building a special container image that includes the Lambda runtime and the Cerbos binary. See the example repository for a demonstration. The repository also contains an example of an AWS Lambda function that creates an AWS API Gateway endpoint to communicate with Cerbos over the HTTP protocol.

Deploy Cerbos as a systemd service
====================

The [Cerbos Linux packages](../installation/binary.html#linux-packages) will automatically create a systemd service during installation. If you are using the tarballs to create a custom installation, you can modify the following sample systemd service definition to match your requirements.

```ini
[Unit]
Description=Cerbos Policy Decision Point

[Service]
ExecStart=/usr/local/bin/cerbos server --config=/etc/cerbos.yaml
ProtectSystem=full
ProtectHome=true
PrivateUsers=true
PrivateTmp=true
DynamicUser=yes

[Install]
WantedBy=multi-user.target
```

Refer to the [systemd documentation](https://www.freedesktop.org/software/systemd/man/systemd.exec.html) for more information about available configuration options.

Why we built Cerbos this way
====================

Welcome! The purpose of this section is to give some insight into how decisions were made when designing and building Cerbos, for the more _curious_ of our users.
* [Why Cerbos runs as a separate process](why%5Fcerbos%5Fruns%5Fas%5Fa%5Fseparate%5Fprocess.html)

Why Cerbos runs as a separate process
====================

If you’re used to traditional authorization approaches, you’d be surprised to find that Cerbos is not a library that you can embed into your application. Instead, Cerbos is designed to be run as a sidecar or a service alongside your application. There are several reasons why we have chosen this approach.

To provide a bit of background, let’s consider how modern software development works in the era of cloud-native computing. Nowadays, the trend is towards microservice architectures where system functionality is split between multiple services that are fairly independent of each other. They are probably owned by different teams within the organization and even developed using different programming languages and tools. Automated CI/CD pipelines deploy new versions of these services many times a day.

![organization](_images/organisation.png)

In these dynamic, polyglot environments, the emergent pattern for providing cross-cutting concerns such as service discovery, resiliency, observability and security in a standardized way is through the use of sidecars or other microservices. Frameworks such as [Dapr](https://dapr.io) and service meshes such as [Istio](https://istio.io) and [Linkerd](https://linkerd.io) are examples of software that employ this pattern.

Authorization is one of those cross-cutting concerns that needs to be standardized and centrally managed across the organization. If authorization rules for the same resource are even slightly different between two services, that creates a security issue. In a polyglot environment, the implementation of access rules would be duplicated between each programming language.
This is a waste of effort and an inevitable source of inconsistencies and bugs due to how programmers interpret the specifications or how the particular programming language deals with certain data types or special cases. Changing access rules for the whole organization requires a coordinated effort to develop, test and roll out those changes across the whole fleet.

Debugging authorization problems in such an environment is quite difficult because no one has overall visibility of the whole system. The access logic is hidden away in code in multiple repositories. Unless the developers have been extremely disciplined, the quality of debugging aids such as traces, audit trails and tests would vary wildly as well.

## [](#%5Fenter%5Fcerbos)Enter Cerbos…

Cerbos is designed to address most of the above problems:

* Access policies are human-readable and stored in a central repository so that all stakeholders have visibility over the security rules implemented in their organization.
* Logs, traces, metrics and audit trails are available out of the box, and there are supplementary tools such as a policy testing framework, linter and a REPL for debugging issues.
* Cerbos automatically detects changes to policies and updates itself on the fly. This makes the rollout of access policy changes easy and almost instantaneous. For most changes, this means that the dependent services don’t need to have their code updated and rolled out to production. It saves development time and deployment headaches.
* Offering Cerbos as a decoupled API allows it to be used by any application or service written in any language while providing a consistent experience across the board. Cerbos facilitates sharing access control logic across different services and applications and gets rid of inherent code duplication, inconsistent implementations, version drift and maintenance burden.
By not having to worry about wrapping and shipping Cerbos features into language-specific, embeddable libraries, we can focus our time and energy on optimizing the product and building new features using a smaller set of libraries and utilities provided by the language of our choice. We can test these features much more thoroughly because we have full control over all the integration points. We don’t have to be concerned about integrating or being compatible with an almost unlimited set of libraries and frameworks available for every programming language. And we don’t have to expend effort figuring out how to share common code across different languages, or fight with language quirks and performance hotspots like foreign function interfaces and concurrency primitives.

Glossary of Cerbos terms
====================

ACTION
Any application-defined operation that could be performed on a `**RESOURCE**`. Actions could be coarse-grained like `create`, `update`, `delete`, `view` or fine-grained like `view:public`, `update:invoice_amt`. Which actions are possible is determined by the application developers, and they can use [Cerbos policies](../policies/index.html) to define the rules that must be satisfied in order for a given `**PRINCIPAL**` to perform one of those actions on a `**RESOURCE INSTANCE**`.

ATTRIBUTE
A piece of information about a `**PRINCIPAL**` or a `**RESOURCE INSTANCE**` that is useful for making an access decision. Cerbos is stateless and has no access to your application data. In order to make access decisions, Cerbos needs to know relevant information about the users and the objects they are trying to access. This information is sent as `attributes` in the Cerbos API request. For example, if you want to restrict users from a particular geography to access only objects from that same geography, you might define a Cerbos rule condition like `request.resource.attr.geography == request.principal.attr.geography`.
Then, in the Cerbos API request, you must send the `geography` attribute (as determined by your application) for both the principal and the resource instance.

CONDITION
Cerbos policy rules can make dynamic, context-aware decisions by evaluating conditional logic against the `**ATTRIBUTES**` sent through the API request. See [Conditions](../policies/conditions.html) for details.

DERIVED ROLE
Most applications have a statically defined set of roles such as `admin`, `writer`, `employee` and so on. Cerbos derived roles are a feature by which these static roles can be dynamically augmented with context-awareness. For example, someone with the `employee` role can be augmented to `us_employee` by checking whether their location is in the USA. Cerbos policies can then be written to target the `us_employee` derived role instead of the `employee` role — which removes repetition of logic across policy files. See [Derived roles](../policies/derived%5Froles.html) for details.

PDP
Policy decision point: essentially, where policies are executed and decisions are made. When you start a Cerbos server through one of the distribution artefacts (binary, container, Helm chart), you start a PDP.

PRINCIPAL
A human or an automated process that wants to perform one or more `**ACTIONS**` on one or more `**RESOURCE INSTANCES**`. Typically called a "user" in most settings, but Cerbos uses the term `Principal` to avoid any ambiguity about whether the user is human or not. You can create [principal policies](../policies/principal%5Fpolicies.html) to define exceptions for particular users.

RESOURCE
A kind or a category of application objects with similar characteristics. The concept is very similar to a `class` in object-oriented programming. For example, in an inventory system, `Invoice` is a resource.
Your system might have thousands or millions of invoices (`**RESOURCE INSTANCES**`: similar to `objects` in object-oriented programming), but there would only be a single Cerbos [resource policy](../policies/resource%5Fpolicies.html) for `Invoice` which encodes all the access rules.

| | Sometimes the term resource is used to refer to a **RESOURCE INSTANCE** when the meaning is obvious from the context (the [API request fields](../api/index.html) are a good example of this). |
| --------------------------------------------------------------------------------------------------------------------------------------------------------- |

RESOURCE INSTANCE
A specific item of a `**RESOURCE**` kind. If a `**RESOURCE**` is a `class`, a `**RESOURCE INSTANCE**` is an `object` of that class. For example, an invoice that was issued to "Acme Corp." with ID "I23456" is a `**RESOURCE INSTANCE**`. When making access decisions using Cerbos, you need to send information about the _resource instances_ to the Cerbos `CheckResources` API endpoint. The Cerbos `**PDP**` would then use the appropriate resource policy (determined by the `kind` specified in the resource instance) to process the information and make an access decision.

Policy authoring
====================

## [](#%5Ftips%5Ffor%5Fworking%5Fwith%5Fpolicies)Tips for working with policies

* Policies can be in either YAML or JSON format. Accepted file extensions are `.yml`, `.yaml` or `.json`. All other extensions are ignored.
* The JSON schemas for Cerbos files are available at:
  * Policy
    * ``
  * [Test suite](compile.html#testing)
    * ``
  * [Principal test fixtures](compile.html#fixtures)
    * ``
  * [Resources test fixtures](compile.html#fixtures)
    * ``
  * [Auxiliary data test fixtures](compile.html#fixtures)
    * ``
* If you prefer to always use the latest version, they can be accessed at:
  * ``
  * ``
  * ``
  * ``
  * ``

## [](#%5Fpolicy%5Fstructure)Policy structure

* The policy header is common to all policy types:
  * `apiVersion`: Required. Must be `api.cerbos.dev/v1`.
  * `description`: Optional. Description of the policy.
  * `disabled`: Optional. Set to `true` to make the Cerbos engine ignore this policy file.
  * `metadata.sourceFile`: Optional. Set to the source of the policy data for auditing purposes.
  * `metadata.annotations`: Optional. Key-value pairs of strings holding free-form data for auditing purposes.
* Resource names, actions, and principal names can be hierarchical. Use `:` as the delimiter. For example: `app:component:resource`.
* Wildcard matches are allowed on certain fields. Wildcards respect the hierarchy delimiter `:`.
* [Scoped policies](scoped%5Fpolicies.html) (optional) are handy for use cases like multi-tenancy where you may want to override particular rules for some tenants.
* See [Conditions](conditions.html) to learn how to write conditions in policy rules.
* See [Schemas](schemas.html) to learn how you can define schemas for validating requests.
* See [Best practices](best%5Fpractices.html) to check out a growing collection of snippets detailing the optimal way to write policies.

## [](#%5Fediting%5Fpolicies)Editing policies

The quickest and easiest way to get familiar with Cerbos policies is to use the [online playground](https://play.cerbos.dev). It provides an IDE-like experience with an interactive editor, examples, code snippets, test cases and other useful utilities to help you design policies.
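To see how the common header fields fit together, here is a minimal sketch of a resource policy. The `finance:expense` resource name, the annotation, and the single rule are hypothetical, chosen only to illustrate the structure:

```yaml
# Minimal illustrative policy; the resource name and rule are hypothetical.
apiVersion: api.cerbos.dev/v1
description: Access rules for expense reports
metadata:
  annotations:
    owner: finance-team   # free-form audit data
resourcePolicy:
  version: default
  resource: "finance:expense"   # hierarchical name using the `:` delimiter
  rules:
    - actions: ["view"]
      effect: EFFECT_ALLOW
      roles: ["*"]
```

The header fields (`apiVersion`, `description`, `metadata`) are the ones listed above; everything under `resourcePolicy` is specific to resource policies.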
### [](#%5Feditor%5Fconfigurations)Editor configurations

Editors with support for the [Language Server Protocol (LSP)](https://microsoft.github.io/language-server-protocol/) can make use of the [YAML language server](https://github.com/redhat-developer/yaml-language-server) implementation when working with Cerbos policies. Simply add the following line at the beginning of your policy file to get context-sensitive code suggestions and validation messages from the editor.

```yaml
# yaml-language-server: $schema=https://api.cerbos.dev/latest/cerbos/policy/v1/Policy.schema.json
```

The same method can be used for [tests](compile.html#testing):

```yaml
# yaml-language-server: $schema=https://api.cerbos.dev/latest/cerbos/policy/v1/TestSuite.schema.json
```

[Resource fixtures](compile.html#fixtures) for tests:

```yaml
# yaml-language-server: $schema=https://api.cerbos.dev/latest/cerbos/policy/v1/TestFixture/Resources.schema.json
```

[Principal fixtures](compile.html#fixtures) for tests:

```yaml
# yaml-language-server: $schema=https://api.cerbos.dev/latest/cerbos/policy/v1/TestFixture/Principals.schema.json
```

[Auxiliary data fixtures](compile.html#fixtures) for tests:

```yaml
# yaml-language-server: $schema=https://api.cerbos.dev/latest/cerbos/policy/v1/TestFixture/AuxData.schema.json
```

The YAML language server also supports per-directory settings. If all your Cerbos policies are contained in a specific directory, you can configure the editor to always use the correct schema for the YAML files in that directory. Refer to the [YAML language server documentation](https://github.com/redhat-developer/yaml-language-server#language-server-settings=) for more information.
Example: Apply the schema to all files in the /cerbos directory

```yaml
yaml.schemas: {
  "https://api.cerbos.dev/latest/cerbos/policy/v1/Policy.schema.json": "/cerbos/*",
  "https://api.cerbos.dev/latest/cerbos/policy/v1/TestSuite.schema.json": "/cerbos/**/*_test.yaml",
  "https://api.cerbos.dev/latest/cerbos/policy/v1/TestFixture/Resources.schema.json": "/cerbos/**/testdata/resources.yaml",
  "https://api.cerbos.dev/latest/cerbos/policy/v1/TestFixture/Principals.schema.json": "/cerbos/**/testdata/principals.yaml",
  "https://api.cerbos.dev/latest/cerbos/policy/v1/TestFixture/AuxData.schema.json": "/cerbos/**/testdata/auxdata.yaml"
}
```

JSON files can specify the schema using the `$schema` top-level property.

```json
"$schema": "https://api.cerbos.dev/latest/cerbos/policy/v1/Policy.schema.json",
```

#### [](#%5Fneovim)Neovim

Refer to your plugin manager documentation to figure out how to install [nvim-lspconfig](https://github.com/neovim/nvim-lspconfig/tree/master) and [configure the yaml-language-server](https://github.com/neovim/nvim-lspconfig/blob/master/doc/server%5Fconfigurations.md#yamlls). Plugins such as [mason-lspconfig](https://github.com/williamboman/mason-lspconfig.nvim) can automatically download and install language servers as well. The following is an example of using [lazy.nvim](https://github.com/folke/lazy.nvim) and [mason.nvim](https://github.com/williamboman/mason.nvim) to install and configure yaml-language-server. It follows the [recommended way of configuring lazy plugins](https://github.com/folke/lazy.nvim#-structuring-your-plugins).
\~/.config/nvim/lua/plugins/lspconfig.lua

```lua
return {
  {
    "neovim/nvim-lspconfig",
    dependencies = {
      {
        "williamboman/mason.nvim",
      },
      {
        "williamboman/mason-lspconfig.nvim",
        opts = {
          ensure_installed = { "yamlls" },
        },
      },
    },
    opts = {
      servers = {
        yamlls = {
          settings = {
            yaml = {
              schemas = {
                ["https://api.cerbos.dev/latest/cerbos/policy/v1/Policy.schema.json"] = "/cerbos/*",
                ["https://api.cerbos.dev/latest/cerbos/policy/v1/TestSuite.schema.json"] = "/cerbos/**/*_test.yaml",
                ["https://api.cerbos.dev/latest/cerbos/policy/v1/TestFixture/Resources.schema.json"] = "/cerbos/**/testdata/resources.yaml",
                ["https://api.cerbos.dev/latest/cerbos/policy/v1/TestFixture/Principals.schema.json"] = "/cerbos/**/testdata/principals.yaml",
                ["https://api.cerbos.dev/latest/cerbos/policy/v1/TestFixture/AuxData.schema.json"] = "/cerbos/**/testdata/auxdata.yaml",
              },
            },
          },
        },
      },
    },
  },
}
```

#### [](#%5Fjetbrains%5Fides)JetBrains IDEs

Navigate to **Preferences** → **Languages & Frameworks** → **Schemas and DTDs** → **JSON Schema Mappings** in the JetBrains IDE of your choice. Add an entry with the following configuration:

Name: Cerbos
Schema file or URL: https://api.cerbos.dev/latest/cerbos/policy/v1/Policy.schema.json
Schema version: JSON Schema Version 7
File path pattern: cerbos/*

![JetBrains JSON Schema Mappings menu](_images/jetbrains-menu.png)

| | In the example above, the schema is applied to all files in the cerbos directory. |
| ------------------------------------------------------------------------------------ |

#### [](#%5Fvisual%5Fstudio%5Fcode)Visual Studio Code

If you are new to Visual Studio Code, refer to the [documentation](https://code.visualstudio.com/docs/getstarted/settings) for more information about how to change settings. Install the YAML language server extension from the Visual Studio Code marketplace. After the extension is installed, hit Ctrl+, or **File** → **Preferences** → **Settings** to edit settings. Expand **Extensions** → **YAML**, click `Edit in settings.json` under `Yaml: Schemas`.
Then add the following snippet:

```json
{
  "yaml.schemas": {
    "https://api.cerbos.dev/latest/cerbos/policy/v1/Policy.schema.json": "cerbos/*",
    "https://api.cerbos.dev/latest/cerbos/policy/v1/TestSuite.schema.json": "/cerbos/**/*_test.yaml",
    "https://api.cerbos.dev/latest/cerbos/policy/v1/TestFixture/Resources.schema.json": "/cerbos/**/testdata/resources.yaml",
    "https://api.cerbos.dev/latest/cerbos/policy/v1/TestFixture/Principals.schema.json": "/cerbos/**/testdata/principals.yaml",
    "https://api.cerbos.dev/latest/cerbos/policy/v1/TestFixture/AuxData.schema.json": "/cerbos/**/testdata/auxdata.yaml"
  }
}
```

| | In the example above, the schema is applied to all files in the cerbos directory. |
| ------------------------------------------------------------------------------------ |

Best practices and recipes
====================

A collection of tips and code snippets designed to help you write cleaner, more optimised Cerbos policies.

## [](#%5Fmodelling%5Fpolicies)Modelling policies

With Cerbos, access rules are always resource-oriented and the policies you write map to these resources within your system. A _resource_ can be anything, and the way you model your policies is up to you — you can achieve the same logical outcome in numerous ways; action-led, role-led, attribute-led, or with combinations thereof. That said, some patterns will lend themselves more naturally to certain scenarios — let’s take a look at some different approaches.

Consider this business model, where the columns are _roles_ and the rows are _actions_:

| | IT\_ADMIN | JR\_MANAGER | SR\_MANAGER | USER | CFO |
| ----- | --------- | ----------- | ----------- | ---- | --- |
| run | | x | x | | x |
| view | x | x | x | x | x |
| edit | | | x | | x |
| save | | | x | | x |
| share | | x | x | | x |

Representing this as a resource policy could be achieved in a variety of ways.
Let’s take a look at each:

### [](#%5Faction%5Fled)Action-led

Here, we focus on an action, and list all the roles that can perform that action:

```yaml
# Principals in the following three roles can perform the `run` action
- actions:
    - "run"
  effect: EFFECT_ALLOW
  roles:
    - JR_MANAGER
    - SR_MANAGER
    - CFO

# All principals can perform the `view` action
- actions:
    - "view"
  effect: EFFECT_ALLOW
  roles:
    - "*"
```

This approach might be suitable if any of the following apply to your system:

* Your roles are "similar" in what they can do, like `JR_MANAGER` and `SR_MANAGER`; it’s likely that `JR_MANAGER` will have a subset of the permissions of `SR_MANAGER`. There will of course be duplication in either direction, but it’s often easier to reason about this from an action perspective.
* You have "high-risk" actions — you want to be able to tell at a glance which roles have access to a particular action. The act of explicitly listing roles per action makes it much more difficult to accidentally give unwanted permissions to the wrong user.
* You have a relatively high number of roles to a low number of actions.

### [](#%5Frole%5Fled)Role-led

Alternatively, we can focus on a role, and list all the actions the role can perform:

```yaml
# These three actions can be performed by principals in the `JR_MANAGER` role
- actions:
    - "run"
    - "view"
    - "share"
  effect: EFFECT_ALLOW
  roles:
    - JR_MANAGER
```

You might opt for a role-led approach if:

* You have distinct roles where it’s rare for your roles to share common actions.
* You have a relatively low number of roles to a high number of actions.

### [](#%5Fhybrid)Hybrid

Perhaps we want to use a combination of the two:

```yaml
# Principals in the `SR_MANAGER` or `CFO` roles can perform all actions
- actions:
    - "*"
  effect: EFFECT_ALLOW
  roles:
    - SR_MANAGER
    - CFO
```

This might apply if your scenario doesn’t strictly fall into one of the previous two sections; individually, or at all.
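Note that the snippets in this section are rule entries; in a complete policy file they sit under `resourcePolicy.rules`. A hypothetical full policy combining an allow-all rule for senior roles with a blanket `view` rule might look like this (the `report` resource name is illustrative, not part of the business model above):

```yaml
# Hypothetical complete policy wrapping rule fragments of the kinds shown above.
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: default
  resource: "report"   # illustrative resource name
  rules:
    # SR_MANAGER and CFO can perform all actions
    - actions: ["*"]
      effect: EFFECT_ALLOW
      roles:
        - SR_MANAGER
        - CFO
    # All principals can perform the `view` action
    - actions: ["view"]
      effect: EFFECT_ALLOW
      roles: ["*"]
```

Whichever modelling style you choose, the surrounding policy structure stays the same; only the shape of the `rules` list changes.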
### [](#%5Fblanket%5Fallow%5Fgranular%5Fdeny)Blanket allow, granular deny We can opt to explicitly state which actions a user **cannot** do: ```yaml # Principals in the `JR_MANAGER` role can perform all actions, other than `edit` and `save` - actions: - "*" effect: EFFECT_ALLOW roles: - "JR_MANAGER" - actions: - "edit" - "save" effect: EFFECT_DENY roles: - "JR_MANAGER" ``` This would suit scenarios where a principal can perform _nearly_ every action, and you want to explicitly list disallowed actions. ### [](#%5Fattribute%5Fled)Attribute-led Consider the following hypothetical scenario: An organization models its resources as specific _data sets_. Each data set is unique, as are the principals trying to access them. The organization uses JWTs extensively to manage and transmit identity/contextual information. The resource policies map 1:1 to each data set, and access is governed by arbitrary information (in this case, passed within the JWT). Given the dynamic nature of audiences, it’s not practical to enumerate all roles that have access. What we could do instead is to globally allow all roles and actions and then determine access based on attributes passed in the JWT. Take a look at the following example policy: ```yaml apiVersion: api.cerbos.dev/v1 resourcePolicy: resource: "data_set" version: default rules: - actions: ["*"] roles: ["*"] effect: EFFECT_ALLOW condition: match: all: of: - expr: has(request.aux_data.jwt.aud) - expr: > "my.custom.audience" in request.aux_data.jwt.aud ``` In the above, we blanket-allow all actions and roles, but specifically rely on the `aud` key parsed from the JWT to determine access. ## [](#%5Fadding%5Fself%5Fservice%5Fcustom%5Froles)Adding self-service custom roles Imagine this scenario: you’re an admin in a multi-tenant system, and you want a method by which you can copy an existing role, and then select which permissions/actions to enable or disable for each. 
There are two ways of approaching this:

### [](#%5Fstatic%5Fpolicies%5Fdynamic%5Fcontext)Static Policies / Dynamic Context

This is the _idiomatic_ way of solving this use case in Cerbos. In the vast majority of cases, it is possible to have the policies statically defined and to pass in dynamic context as attributes of a principal. This dynamic context can be any arbitrary data, such as the principal’s location, age, or the specific roles it has within the context of an organizational unit (a department, a tenant or a project, for example). This contextual data would be retrieved at request time from another service or a data store.

Let’s look at an example. Here is a resource policy for a resource of type `"workspace"`:

workspace.yaml

```yaml
apiVersion: "api.cerbos.dev/v1"
resourcePolicy:
  version: "default"
  resource: "workspace"
  rules:
    - actions:
        - workspace:view
        - pii:view
      effect: EFFECT_ALLOW
      roles:
        - USER
      condition:
        match:
          expr: P.attr.workspaces[R.id].role == "OWNER"
```

Notice how the condition relies on context passed in via the `P.attr.workspaces` map: the key is the resource ID, and the value is a map whose `role` field is compared against the expected value `"OWNER"`.
We can grant access to a principal with the `USER` role by constructing the following request payload:

* cURL
* .NET
* Go
* Java
* JS
* PHP
* Python
* Ruby
* Rust

```shell
cat <<EOF | curl --silent "localhost:3592/api/check/resources?pretty" -d @-
{
  "principal": {
    "id": "123",
    "roles": ["USER"],
    "attr": {
      "workspaces": {
        "workspaceA": {"role": "OWNER"},
        "workspaceB": {"role": "MEMBER"}
      }
    }
  },
  "resources": [
    {
      "resource": {"kind": "workspace", "id": "workspaceA"},
      "actions": ["workspace:view", "pii:view"]
    },
    {
      "resource": {"kind": "workspace", "id": "workspaceB"},
      "actions": ["workspace:view", "pii:view"]
    }
  ]
}
EOF
```

```csharp
// NOTE: the opening of this snippet was lost during extraction. The client
// setup below is a sketch; check the Cerbos .NET SDK documentation for the
// exact API of the version you use.
using System;
using System.Collections.Generic;
using Cerbos.Sdk;
using Cerbos.Sdk.Builders;

internal class Program
{
    private static void Main(string[] args)
    {
        var client = new CerbosClientBuilder("http://localhost:3593").WithPlaintext().BuildBlockingClient();

        string[] actions = { "workspace:view", "pii:view" };

        var result = client.CheckResources(
            Principal.NewInstance("123", "USER")
                .WithAttribute("workspaces", AttributeValue.MapValue(new Dictionary<string, AttributeValue>()
                {
                    {
                        "workspaceA", AttributeValue.MapValue(new Dictionary<string, AttributeValue>()
                        {
                            {"role", AttributeValue.StringValue("OWNER")}
                        })
                    },
                    {
                        "workspaceB", AttributeValue.MapValue(new Dictionary<string, AttributeValue>()
                        {
                            {"role", AttributeValue.StringValue("MEMBER")}
                        })
                    }
                })),
            ResourceAction.NewInstance("workspace", "workspaceA")
                .WithActions(actions),
            ResourceAction.NewInstance("workspace", "workspaceB")
                .WithActions(actions)
        );

        foreach (string n in new string[] { "workspaceA", "workspaceB" })
        {
            var r = result.Find(n);
            Console.Write(String.Format("\nResource: {0}\n", n));
            foreach (var i in r.GetAll())
            {
                String action = i.Key;
                Boolean isAllowed = i.Value;
                Console.Write(String.Format("\t{0} -> {1}\n", action, isAllowed ? "EFFECT_ALLOW" : "EFFECT_DENY"));
            }
        }
    }
}
```

```go
package main

import (
	"context"
	"log"

	"github.com/cerbos/cerbos-sdk-go/cerbos"
)

func main() {
	c, err := cerbos.New("localhost:3593", cerbos.WithPlaintext())
	if err != nil {
		log.Fatalf("Failed to create client: %v", err)
	}

	principal := cerbos.NewPrincipal("123", "USER")
	// We use map[string]any as strictly typed nested maps aren't supported
	principal.WithAttr("workspaces", map[string]map[string]any{
		"workspaceA": {
			"role": "OWNER",
		},
		"workspaceB": {
			"role": "MEMBER",
		},
	})

	kind := "workspace"
	actions := []string{"workspace:view", "pii:view"}

	batch := cerbos.NewResourceBatch()
	batch.Add(cerbos.NewResource(kind, "workspaceA"), actions...)
	batch.Add(cerbos.NewResource(kind, "workspaceB"), actions...)
resp, err := c.CheckResources(context.Background(), principal, batch) if err != nil { log.Fatalf("Failed to check resources: %v", err) } log.Printf("%v", resp) } ``` ```java package demo; import static dev.cerbos.sdk.builders.AttributeValue.mapValue; import static dev.cerbos.sdk.builders.AttributeValue.stringValue; import java.util.Map; import dev.cerbos.sdk.CerbosBlockingClient; import dev.cerbos.sdk.CerbosClientBuilder; import dev.cerbos.sdk.CheckResult; import dev.cerbos.sdk.builders.Principal; import dev.cerbos.sdk.builders.ResourceAction; public class App { public static void main(String[] args) throws CerbosClientBuilder.InvalidClientConfigurationException { CerbosBlockingClient client=new CerbosClientBuilder("localhost:3593").withPlaintext().buildBlockingClient(); for (String n : new String[]{"workspaceA", "workspaceB"}) { CheckResult cr = client.batch( Principal.newInstance("123", "USER") .withAttribute("workspaces", mapValue(Map.of( "workspaceA", mapValue(Map.of( "role", stringValue("OWNER") )), "workspaceB", mapValue(Map.of( "role", stringValue("MEMBER") )) ))) ) .addResources( ResourceAction.newInstance("workspace","workspaceA") .withActions("workspace:view", "pii:view"), ResourceAction.newInstance("workspace","workspaceB") .withActions("workspace:view", "pii:view") ) .check().find(n).orElse(null); if (cr != null) { System.out.printf("\nResource: %s\n", n); cr.getAll().forEach((action, allowed) -> { System.out.printf("\t%s -> %s\n", action, allowed ? 
"EFFECT_ALLOW" : "EFFECT_DENY"); }); } } } } ``` ```javascript const { GRPC: Cerbos } = require("@cerbos/grpc"); const cerbos = new Cerbos("localhost:3593", { tls: false }); (async() => { const kind = "workspace"; const actions = ["workspace:view", "pii:view"]; const cerbosPayload = { principal: { id: "123", roles: ["USER"], attributes: { workspaces: { workspaceA: { role: "OWNER", }, workspaceB: { role: "MEMBER", } }, }, }, resources: [ { resource: { kind: kind, id: "workspaceA", }, actions: actions, }, { resource: { kind: kind, id: "workspaceB", }, actions: actions, }, ], }; const decision = await cerbos.checkResources(cerbosPayload); console.log(decision.results) })(); ``` ```php build(); $principal = Principal::newInstance("123") ->withRole("USER") ->withAttribute("workspaces", [ "workspaceA" => [ "role" => "OWNER" ], "workspaceB" => [ "role" => "MEMBER" ] ]); $type = "workspace"; $resourceAction1 = ResourceAction::newInstance($type, "workspaceA") ->withAction("workspace:view") ->withAction("pii:view"); $resourceAction2 = ResourceAction::newInstance($type, "workspaceB") ->withAction("workspace:view") ->withAction("pii:view"); $checkResourcesResult = $client->checkResources($principal, array($resourceAction1, $resourceAction2), null, null); echo json_encode($checkResourcesResult, JSON_PRETTY_PRINT); ?> ``` ```python import json from cerbos.sdk.client import CerbosClient from cerbos.sdk.model import Principal, Resource, ResourceAction, ResourceList from fastapi import HTTPException, status principal = Principal( "123", roles=["USER"], attr={ "workspaces": { "workspaceA": { "role": "OWNER", }, "workspaceB": { "role": "MEMBER", } } } ) actions = ["workspace:view", "pii:view"] resource_list = ResourceList( resources=[ ResourceAction( Resource( "workspaceA", "workspace", ), actions=actions, ), ResourceAction( Resource( "workspaceB", "workspace", ), actions=actions, ), ], ) with CerbosClient(host="http://localhost:3592") as c: try: resp = 
c.check_resources(principal=principal, resources=resource_list) resp.raise_if_failed() except Exception: raise HTTPException( status_code=status.HTTP_403_FORBIDDEN, detail="Unauthorized" ) print(json.dumps(resp.to_dict(), sort_keys=False, indent=4)) ``` ```ruby # frozen_string_literal: true require "cerbos" require "json" client = Cerbos::Client.new("localhost:3593", tls: false) kind = "workspace" actions = ["workspace:view", "pii:view"] r1 = { kind: kind, id: "workspaceA" } r2 = { kind: kind, id: "workspaceB" } decision = client.check_resources( principal: { id: "123", roles: ["USER"], attributes: { workspaces: { workspaceA: { role: "OWNER" }, workspaceB: { role: "MEMBER" } } } }, resources: [ { resource: r1, actions: actions }, { resource: r2, actions: actions } ] ) puts JSON.pretty_generate({ results: [ { resource: r1, actions: { "workspace:view": decision.allow?(resource: r1, action: "workspace:view"), "pii:view": decision.allow?(resource: r1, action: "pii:view") } }, { resource: r2, actions: { "workspace:view": decision.allow?(resource: r2, action: "workspace:view"), "pii:view": decision.allow?(resource: r2, action: "pii:view") } } ] }) ``` ```rust use cerbos::sdk::attr::{attr, StructVal}; use cerbos::sdk::model::{Principal, Resource, ResourceAction, ResourceList}; use cerbos::sdk::{CerbosAsyncClient, CerbosClientOptions, CerbosEndpoint, Result}; #[tokio::main] async fn main() -> Result<()> { let opt = CerbosClientOptions::new(CerbosEndpoint::HostPort("localhost", 3593)).with_plaintext(); let mut client = CerbosAsyncClient::new(opt).await?; let principal = Principal::new("123", ["USER"]).with_attributes([attr( "workspaces", StructVal([ ("workspaceA", StructVal([("role", "OWNER")])), ("workspaceB", StructVal([("role", "MEMBER")])), ]), )]); let actions: [&str; 2] = ["workspace:view", "pii:view"]; let kind = "workspace"; let resp = client .check_resources( principal, ResourceList::new_from([ ResourceAction(Resource::new("workspaceA", kind), actions), 
ResourceAction(Resource::new("workspaceB", kind), actions), ]), None, ) .await?; println!("{:?}", resp.response); Ok(()) } ``` You can find a full (and extended) example of the above in our [SaaS Workspace Policy playground example](https://play.cerbos.dev/p/IJxlK6131f642ND65F1EhPmiT18Ap1A5). ### [](#%5Fdynamic%5Fpolicies)Dynamic Policies There might be circumstances where you want to create or update resources and actions on the fly; an example of this might be a multi-tenant platform that provides tenants the ability to manage their own policies. If this is the case, then you can use the [Admin API](../api/admin%5Fapi.html) configured alongside a mutable [database storage engine](../configuration/storage.html#sqlite3) to provide this functionality. This would be handled within your application layer, with the desired policy contents provided to the PDP via the API. For a full example implementation, check out [this demo](https://github.com/cerbos/demo-admin-api). ## [](#%5Fpolicy%5Frepository%5Flayout)Policy repository layout Cerbos expects the policy repository to have a particular directory layout. * The directory must only contain Cerbos policy files, policy test files and schemas. Any other YAML or JSON files will cause Cerbos to consider the policy repository as invalid. * If you use [schemas](schemas.html), the `_schemas` directory must be a top-level directory at the root of the policy repo. * All policy tests must have a file name ending in `_test` and a `.yaml`, `.yml` or `.json` extension. * Directories named `testdata` can be used to store test data for policy tests. Cerbos will not attempt to locate any policy files inside those directories. * Hidden files and directories (names starting with `.`) are ignored. A typical policy repository might resemble the following: . 
├── _schemas
│   ├── principal.json
│   └── resources
│       ├── leave_request.json
│       ├── purchase_order.json
│       └── salary_record.json
├── derived_roles
│   ├── backoffice_roles.yaml
│   └── common_roles.yaml
├── principal_policies
│   └── auditor_audrey.yaml
└── resource_policies
    ├── finance
    │   ├── purchase_order.yaml
    │   └── purchase_order_test.yaml
    └── hr
        ├── leave_request.yaml
        ├── leave_request_test.yaml
        ├── salary_record.yaml
        ├── salary_record_test.yaml
        └── testdata
            ├── auxdata.yaml
            ├── principals.yaml
            └── resources.yaml

Validating and testing policies
====================

## [](#%5Fvalidating%5Fpolicies)Validating policies

You can use the Cerbos compiler to make sure that your policies are valid before pushing them to a production Cerbos instance. We recommend setting up a git hook or a CI step to run the Cerbos compiler before you push any policy changes to production.

```sh
docker run -i -t -v /path/to/policy/dir:/policies ghcr.io/cerbos/cerbos:0.45.1 compile /policies
```

## [](#testing)Testing policies

You can write optional tests for policies and run them as part of the compilation stage to make sure that the policies do exactly what you expect. Tests are defined using the familiar YAML format as well. A test file must have a `_test` suffix in its name and one of the following file extensions: `yaml`, `yml`, or `json`. For example, `album_test.yml`, `album_test.yaml` or `album_test.json`.
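In its simplest form, a test suite needs only a name, a few fixtures and one test. A minimal sketch (the `report` policy, fixture names and attributes here are hypothetical):

```yaml
---
name: ReportTestSuite
description: Smoke test for a hypothetical report resource policy
principals:
  alicia:
    id: aliciaID
    roles:
      - user
resources:
  alicia_report:
    id: XX100
    kind: report
    attr:
      owner: aliciaID
tests:
  - name: Owner can view their report
    input:
      principals:
        - alicia
      resources:
        - alicia_report
      actions:
        - view
    expected:
      - principal: alicia
        resource: alicia_report
        actions:
          view: EFFECT_ALLOW
```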
Test suite definition ```yaml --- name: AlbumObjectTestSuite (1) description: Tests for verifying the album:object resource policy (2) options: now: "2022-08-02T15:00:00Z" (3) defaultPolicyVersion: staging (4) lenientScopeSearch: true (5) globals: (6) my_global_var: foo principals: (7) alicia: id: aliciaID roles: - user bradley: id: bradleyID roles: - user principalGroups: (8) everyone: principals: - alicia - bradley resources: (9) alicia_album: id: XX125 kind: album:object policyVersion: default attr: owner: aliciaID public: false flagged: false bradley_album: id: XX250 kind: album:object policyVersion: staging attr: owner: bradleyID public: false flagged: false resourceGroups: (10) all_albums: resources: - alicia_album - bradley_album auxData: (11) validJWT: jwt: iss: my.domain aud: ["x", "y"] myField: value tests: (12) - name: Accessing an album (13) options: (14) now: "2022-08-03T15:00:00Z" (15) defaultPolicyVersion: production (16) lenientScopeSearch: false (17) globals: (18) my_global_var: bar input: (19) principals: (20) - alicia - bradley resources: (21) - alicia_album - bradley_album actions: (22) - view - delete auxData: validJWT (23) expected: (24) - principal: alicia (25) resource: alicia_album (26) actions: (27) view: EFFECT_ALLOW delete: EFFECT_ALLOW outputs: (28) - action: view (29) expected: (30) - src: resource.album.vdefault#view-rule val: key1: value1 key2: ["value2", "value3"] - src: resource.album.vdefault#token-lifetime val: 1h - principal: bradley resource: bradley_album actions: view: EFFECT_ALLOW delete: EFFECT_ALLOW - name: Using groups input: principalGroups: (31) - everyone resourceGroups: (32) - all_albums actions: - download expected: - principalGroups: (33) - everyone resourceGroups: (34) - all_albums actions: download: EFFECT_DENY ``` | **1** | Name of the test suite | | ------ | 
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **2** | Description of the test suite | | **3** | Optional RFC3339 timestamp to be used as the return value of the now function. Applies to all tests in the suite unless overridden locally. | | **4** | Optionally set [default policy version](../configuration/engine.html#default%5Fpolicy%5Fversion) for this test suite | | **5** | Optionally set [lenient scope search](../configuration/engine.html#lenient%5Fscopes) for this test suite | | **6** | Optionally set [globals](../configuration/engine.html#globals) for this test suite | | **7** | Map of principal fixtures. The key is a string that can be used to refer to the associated principal. | | **8** | Map of principal groups. The key is a string that can be used to refer to the associated group of principal fixtures. | | **9** | Map of resource fixtures. The key is a string that can be used to refer to the associated resource. | | **10** | Map of resource groups. The key is a string that can be used to refer to the associated group of resource fixtures. | | **11** | Map of (optional) auxiliary data fixtures required to evaluate some requests. The key is a string that can be used to refer to the associated auxData. | | **12** | List of tests in this suite | | **13** | Name of the test | | **14** | Optionally set options that apply to just this test. Test-specific options are not merged with suite-wide options, so any unspecified values revert to the default. | | **15** | Optional RFC3339 timestamp to be used as the return value of the now function. | | **16** | Optionally set [default policy version](../configuration/engine.html#default%5Fpolicy%5Fversion) for this test. | | **17** | Optionally set [lenient scope search](../configuration/engine.html#lenient%5Fscopes) for this test. 
| | **18** | Optionally set [globals](../configuration/engine.html#globals) for this test. | | **19** | Input to the policy engine | | **20** | List of keys of principal fixtures to test | | **21** | List of keys of resource fixtures to test | | **22** | List of actions to test | | **23** | Key of auxiliary data fixture to test (optional) | | **24** | List of outcomes expected for each principal and resource. If a principal+resource pair specified in input is not listed in expected, then EFFECT\_DENY is expected for all actions for that pair. | | **25** | Key of the principal fixture under test. Use principals instead of principal if you want to specify identical expectations for multiple principals. | | **26** | Key of the resource fixture under test. Use resources instead of resource if you want to specify identical expectations for multiple resources. | | **27** | Expected outcomes for each action for each principal+resource pair. If an action specified in input is not listed, then EFFECT\_DENY is expected for that action. | | **28** | Optional list of [output values](outputs.html) to match | | **29** | Name of the action that would produce the output | | **30** | List of expected output values | | **31** | List of keys of principal groups to test. You can provide this instead of, or as well as, principals. | | **32** | List of keys of resource groups to test. You can provide this instead of, or as well as, resources. | | **33** | Key of the principal group under test. You can provide this instead of, or as well as, principal or principals. | | **34** | Key of the resource group under test. You can provide this instead of, or as well as, resource or resources. | ### [](#fixtures)Sharing test fixtures It is possible to share principals, resources and auxData blocks between test suites stored in the same directory. 
Create a `testdata` directory in the directory containing your test suite files, then define shared resources, principals and auxData in `testdata/resources.yml`, `testdata/principals.yml`, `testdata/auxdata.yml` respectively (`yaml` and `json` extensions are also supported). tests ├── album_object_test.yaml ├── gallery_object_test.yaml ├── slideshow_object_test.yaml └── testdata ├── auxdata.yaml ├── principals.yaml └── resources.yaml An example of `testdata/principals.yml` ```yaml --- principals: # required john: id: johnID roles: - user - moderator principalGroups: # optional moderators: principals: - john ``` An example of `testdata/resources.yml` ```yaml --- resources: # required alicia_album: id: XX125 kind: "album:object" attr: owner: aliciaID public: false flagged: false resourceGroups: # optional all_albums: resources: - alicia_album ``` An example of `testdata/auxdata.yml` ```yaml --- auxData: # required validJWT: jwt: iss: my.domain aud: ["x", "y"] myField: value ``` | | [YAML anchors and overrides](https://www.educative.io/blog/advanced-yaml-syntax-cheatsheet#anchors) are a great way to reduce repetition and reuse definitions in test cases. 
For example, the following definitions are equivalent. |
| ----------------------------------------------------------------------------------------------------------------------------------------------------------- |

Without anchors and overrides:

```yaml
resources:
  alicias_album1:
    id: "XX125"
    kind: "album:object"
    attr:
      owner: "alicia"
      public: false
      flagged: false
  alicias_album2:
    id: "XX525"
    kind: "album:object"
    attr:
      owner: "alicia"
      public: false
      flagged: false
  alicias_album3:
    id: "XX925"
    kind: "album:object"
    attr:
      owner: "alicia"
      public: false
      flagged: false
```

With anchors and overrides:

```yaml
resources:
  alicias_album1:
    id: "XX125"
    kind: "album:object"
    attr: &alicia_album_attr
      owner: "alicia"
      public: false
      flagged: false
  alicias_album2:
    id: "XX525"
    kind: "album:object"
    attr:
      <<: *alicia_album_attr
  alicias_album3:
    id: "XX925"
    kind: "album:object"
    attr:
      <<: *alicia_album_attr
```

### [](#%5Frunning%5Ftests)Running tests

The `compile` command automatically discovers test files in the policy repository.
```sh docker run -i -t \ -v /path/to/policy/dir:/policies \ ghcr.io/cerbos/cerbos:0.45.1 compile /policies ``` The output format can be controlled using the `--output` flag, which accepts the values `tree` (default), `list` and `json`. The `--color` flag controls the coloring of the output. To produce machine readable output from the tests, pass `--output=json --color=never` to the command. By default, all discovered tests are run. Use the `--skip-tests` flag to skip all tests or use the `--run` flag to run a set of tests that match a regular expression. Example: Running only tests that contain 'Delete' in the name ```sh docker run -i -t \ -v /path/to/policy/dir:/policies \ ghcr.io/cerbos/cerbos:0.45.1 compile --run=Delete /policies ``` You can mark entire suites or individual tests in a suite with `skip: true` to skip them during test runs. Example: Skipping a test ```yaml --- name: AlbumObjectTestSuite description: Tests for verifying the album:object resource policy tests: - name: View private album skip: true skipReason: "Policy under review" input: principals: ["alicia"] resources: ["alicia_private_album"] actions: ["view"] expected: - principal: alicia resource: alicia_private_album actions: view: EFFECT_ALLOW ``` ## [](#ci-environments)Validating and testing policies in CI environments Because Cerbos artefacts are distributed as self-contained containers and binaries, you should be able to easily integrate Cerbos into any CI environment. Simply configure your workflow to execute the commands described in the sections above using either the Cerbos container (you may need to configure mount points to suit your repo structure) or the binary. 
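The git hook recommended earlier can be as simple as the following pre-commit hook (a sketch; it assumes Docker is available and that policies live in `./policies` at the repository root — adjust the image tag and path to suit your setup):

```sh
#!/bin/sh
# .git/hooks/pre-commit — refuse the commit if policies fail to compile or test.
# Assumes Docker is available and policies live in ./policies (hypothetical path).
set -e

docker run -i --rm \
  -v "$(pwd)/policies:/policies" \
  ghcr.io/cerbos/cerbos:0.45.1 compile /policies
```

Because `compile` also runs any discovered tests, this single hook covers both validation and testing.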
### [](#%5Fgithub%5Factions)GitHub Actions

* [cerbos-setup-action](https://github.com/cerbos/cerbos-setup-action): Install `cerbos` and `cerbosctl` binaries into your workflow tools cache
* [cerbos-compile-action](https://github.com/cerbos/cerbos-compile-action): Compile and (optionally) test Cerbos policies

Example workflow

```yaml
---
name: PR Check
on:
  pull_request:
    branches:
      - main
jobs:
  cerbosCheck:
    name: Check Cerbos policies
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Setup Cerbos
        uses: cerbos/cerbos-setup-action@v1
        with:
          version: latest
      - name: Compile and test policies
        uses: cerbos/cerbos-compile-action@v1
        with:
          policyDir: policies
```

### [](#%5Fgitlab%5Fci)GitLab CI

Example pipeline

```yaml
---
stages:
  - prepare
  - compile

download-cerbos:
  stage: prepare
  script:
    - curl https://github.com/cerbos/cerbos/releases/download/v0.45.1/cerbos_0.45.1_Linux_x86_64.tar.gz -L --output /tmp/cerbos.tar.gz
    - tar -xf /tmp/cerbos.tar.gz -C ./
    - chmod +x ./cerbos
  artifacts:
    paths:
      - cerbos

compile-job:
  stage: compile
  dependencies: ["download-cerbos"]
  script:
    - ./cerbos compile ./policies
```

### [](#%5Fdagger)Dagger

The [Dagger](https://dagger.io) Cerbos module can be installed by running `dagger install github.com/cerbos/dagger-cerbos`. This module provides a `compile` function for compiling and testing Cerbos policy repositories and a `server` service for starting a Cerbos server.
```sh
# Compile and run tests on a policy repository
dagger -m github.com/cerbos/dagger-cerbos call compile --policy-dir=./cerbos

# Start a Cerbos server with the default disk driver
dagger -m github.com/cerbos/dagger-cerbos call server --policy-dir=./cerbos up

# Start a Cerbos server instance configured to use an in-memory SQLite policy repository
dagger -m github.com/cerbos/dagger-cerbos call server --config=storage.driver=sqlite3,storage.sqlite3.dsn=:memory:,server.adminAPI.enabled=true up

# View usage information
dagger -m github.com/cerbos/dagger-cerbos call compile --help
dagger -m github.com/cerbos/dagger-cerbos call server --help
```

Conditions
====================

A powerful feature of Cerbos policies is the ability to define conditions that are evaluated against the data provided in the request. Conditions are written using the [Common Expression Language (CEL)](https://github.com/google/cel-spec/blob/master/doc/intro.md).

| | Cerbos ships with an interactive REPL that can be used to experiment with writing CEL conditions. It can be started by running `cerbos repl`. See [the REPL documentation](../cli/cerbos.html#repl) for more information. |
| -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |

Every condition expression must evaluate to a boolean true/false value. A condition block in a policy can contain either a single condition expression, or multiple expressions combined using the `all`, `any`, or `none` operators. These logical operators may be nested.
Condition block ```yaml condition: match: all: of: - expr: request.resource.attr.status == "PENDING_APPROVAL" - expr: > "GB" in request.resource.attr.geographies ``` ## [](#top%5Flevel%5Fidentifiers)Top-level identifiers Within a condition expression, you have access to several top-level identifiers: `request` Data provided in the check or plan request (principal, resource, and auxiliary data). `runtime` Additional data computed while evaluating the policy. `variables` Variables declared in the [variables section of the policy](variables.html#variables). `constants` Variables declared in the [constants section of the policy](variables.html#constants). `globals` Global variables declared in the [policy engine configuration](../configuration/engine.html#globals). There are also single-letter aliases available to allow you to write terser expressions: `P` `request.principal` `R` `request.resource` `V` `variables` `C` `constants` `G` `globals` The `request` object ```yaml request: principal: (1) id: alice (2) roles: (3) - employee attr: (4) geography: GB resource: (5) kind: leave_request (6) id: XX125 (7) attr: (8) owner: alice auxData: (9) jwt: (10) iss: acme.corp ``` | **1** | The principal whose permissions are being checked. | | ------ | ----------------------------------------------------------------------------------- | | **2** | ID of the principal. | | **3** | Static roles that are assigned to the principal by your identity management system. | | **4** | Free-form context data about the principal. | | **5** | The resource on which the principal is performing actions. | | **6** | Resource kind. | | **7** | ID of the resource instance. | | **8** | Free-form context data about the resource instance. | | **9** | [Auxiliary data sources](../configuration/auxdata.html). | | **10** | JWT claims. 
|

The `runtime` object

```yaml
runtime:
  effectiveDerivedRoles: (1)
    - owner
    - gb_employee
```

| **1** | [Derived roles](derived%5Froles.html) that were assigned to the principal by Cerbos while evaluating the policy. This is only populated in expressions in resource policies, and only includes derived roles that are referenced in at least one policy rule. |
| ----- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |

## [](#%5Fexpressions%5Fand%5Fblocks)Expressions and blocks

Single boolean expression

```yaml
condition:
  match:
    expr: P.id.matches("^dev_.*")
```

`all` operator: all expressions must evaluate to true (logical AND)

```yaml
condition:
  match:
    all:
      of:
        - expr: R.attr.status == "PENDING_APPROVAL"
        - expr: >
            "GB" in R.attr.geographies
        - expr: P.attr.geography == "GB"
```

`any` operator: at least one of the expressions must evaluate to true (logical OR)

```yaml
condition:
  match:
    any:
      of:
        - expr: R.attr.status == "PENDING_APPROVAL"
        - expr: >
            "GB" in R.attr.geographies
        - expr: P.attr.geography == "GB"
```

`none` operator: none of the expressions should evaluate to true (logical negation)

```yaml
condition:
  match:
    none:
      of:
        - expr: R.attr.status == "PENDING_APPROVAL"
        - expr: >
            "GB" in R.attr.geographies
        - expr: P.attr.geography == "GB"
```

Nesting operators

```yaml
condition:
  match:
    all:
      of:
        - expr: R.attr.status == "DRAFT"
        - any:
            of:
              - expr: R.attr.dev == true
              - expr: R.attr.id.matches("^[98][0-9]+")
        - none:
            of:
              - expr: R.attr.qa == true
              - expr: R.attr.canary == true
```

The above nested block is equivalent to the following:

```yaml
condition:
  match:
    expr: >
      (R.attr.status == "DRAFT" &&
      (R.attr.dev == true || R.attr.id.matches("^[98][0-9]+")) &&
      !(R.attr.qa == true || R.attr.canary == true))
```

Quotes in expressions

Single and double quotes have special
meanings in YAML. To avoid parsing errors when your expression contains quotes, use the YAML block scalar syntax or wrap the expression in parentheses.

```yaml
expr: >
  "GB" in R.attr.geographies
```

```yaml
expr: ("GB" in R.attr.geographies)
```

## [](#%5Fpolicy%5Fvariables)Policy variables

To avoid duplication in condition expressions, you can define [variables and constants in policies](variables.html).

## [](#auxdata)Auxiliary data

If you have [auxiliary data sources configured](../configuration/auxdata.html), they can be accessed using `request.auxData`.

Accessing JWT claims

```yaml
"cerbie" in request.auxData.jwt.aud && request.auxData.jwt.iss == "cerbos"
```

## [](#%5Foperators)Operators

| | CEL has many built-in functions and operators. The fully up-to-date list can be found in the [CEL specification](https://github.com/google/cel-spec). |
| --------------------------------------------------------------------------------------------------------------------------------------------------------- |

| Operator | Description                      |
| -------- | -------------------------------- |
| !        | Logical negation (NOT)           |
| \-       | Subtraction/numeric negation     |
| !=       | Not equals                       |
| %        | Modulo                           |
| &&       | Logical AND                      |
| \|\|     | Logical OR                       |
| \*       | Multiplication                   |
| +        | Addition/concatenation           |
| /        | Division                         |
| <=       | Less than or equal to            |
| <        | Less than                        |
| \==      | Equals                           |
| \>=      | Greater than or equal to         |
| \>       | Greater than                     |
| in       | Membership in lists or maps      |
| ? :      | Ternary condition (if-then-else) |

## [](#%5Fdurations)Durations

| | Duration values must be specified in one of the following units. Larger units like days, weeks or years are not supported because of ambiguity around their meaning due to factors such as daylight saving time transitions.
Valid suffixes are ns (nanoseconds), us (microseconds), ms (milliseconds), s (seconds), m (minutes) and h (hours). | | ----- | Test data ```json ... "resource": { "kind": "leave_request", "attr": { "cooldownPeriod": "3750s", "lastAccessed": "2021-04-20T10:00:20.021-05:00" } } ... ``` | Function | Description | Example | | --------------- | ----- | ----- | | duration | Convert a string to a duration. The string must contain a valid duration suffixed by one of ns, us, ms, s, m or h. E.g. 3750s | duration(R.attr.cooldownPeriod).getSeconds() == 3750 | | getHours | Get hours from a duration | duration(R.attr.cooldownPeriod).getHours() == 1 | | getMilliseconds | Get milliseconds from a duration | duration(R.attr.cooldownPeriod).getMilliseconds() == 3750000 | | getMinutes | Get minutes from a duration | duration(R.attr.cooldownPeriod).getMinutes() == 62 | | getSeconds | Get seconds from a duration | duration(R.attr.cooldownPeriod).getSeconds() == 3750 | | timeSince | Time elapsed since the given timestamp to current time on the server. This is a Cerbos extension to CEL | timestamp(R.attr.lastAccessed).timeSince() > duration("1h") | ## [](#hierarchies)Hierarchies | | The hierarchy functions are Cerbos-specific extensions to CEL. | | ----- | Test data ```json ... "principal": { "id": "john", "roles": ["employee"], "attr": { "scope": "foo.bar.baz.qux", } }, "resource": { "kind": "leave_request", "attr": { "scope": "foo.bar", } } ...
``` | Function | Description | Example | | ----------------- | ------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------- | | hierarchy | Convert a dotted string or a string list to a hierarchy | hierarchy("a.b.c") == hierarchy(\["a","b","c"\]) | | hierarchy | Convert a delimited string representation to a hierarchy | hierarchy("a:b:c", ":").size() == 3 | | ancestorOf | Returns true if the first hierarchy shares a common prefix with the second hierarchy | hierarchy("a.b").ancestorOf(hierarchy("a.b.c.d")) == true | | commonAncestors | Returns the common ancestor hierarchy | hierarchy(R.attr.scope).commonAncestors(hierarchy(P.attr.scope)) == hierarchy("foo.bar") | | descendentOf | Mirror function of ancestorOf | hierarchy("a.b.c.d").descendentOf(hierarchy("a.b")) == true | | immediateChildOf | Returns true if the first hierarchy is a first-level child of the second hierarchy | hierarchy("a.b.c").immediateChildOf(hierarchy("a.b")) == true && hierarchy("a.b.c.d").immediateChildOf(hierarchy("a.b")) == false | | immediateParentOf | Mirror function of immediateChildOf | hierarchy("a.b").immediateParentOf(hierarchy("a.b.c")) == true && hierarchy("a.b").immediateParentOf(hierarchy("a.b.c.d")) == false | | overlaps | Returns true if one of the hierarchies is a prefix of the other | hierarchy("a.b.c").overlaps(hierarchy("a.b.c.d.e")) == true && hierarchy("a.b.x").overlaps(hierarchy("a.b.c.d.e")) == false | | siblingOf | Returns true if both hierarchies share the same parent | hierarchy("a.b.c").siblingOf(hierarchy("a.b.d")) == true | | size | Returns the number of levels in the hierarchy | hierarchy("a.b.c").size() == 3 | | \[\] | Access a level in the hierarchy | hierarchy("a.b.c.d")\[1\] == "b" | ## [](#%5Fip%5Faddresses)IP addresses | | The IP address functions are Cerbos-specific extensions to CEL. 
| | ------------------------------------------------------------------ | Test data ```json ... "principal": { "id": "elmer_fudd", "attr": { "ipv4Address": "192.168.0.10", "ipv6Address": "2001:0db8:0000:0000:0000:0000:1000:0000" } } ... ``` | Function | Description | Example | | ------------- | ---------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------- | | inIPAddrRange | Check whether the IP address is in the range defined by the CIDR | P.attr.ipv4Address.inIPAddrRange("192.168.0.0/24") && P.attr.ipv6Address.inIPAddrRange("2001:db8::/48") | ## [](#%5Flists%5Fand%5Fmaps)Lists and maps Test data ```json ... "principal": { "id": "elmer_fudd", "attr": { "id": "125", "teams": ["design", "communications", "product", "commercial"], "limits": { "design": 10, "product": 25 }, "clients": { "acme": {"active": true}, "bb inc": {"active": true} } } } ... ``` | Operator/Function | Description | Example | | ----------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------- | | + | Concatenates lists | P.attr.teams + \["design", "engineering"\] | | \[\] | Index into a list or a map | P.attr.teams\[0\] == "design" && P.attr.clients\["acme"\]\["active"\] == true | | all | Check whether all elements in a list match the predicate. 
| P.attr.teams.all(t, size(t) > 3) && \[1, 2, 3\].all(i, j, i < j) | | distinct | Returns the distinct elements of a list | \[1, 2, 2, 3, 3, 3\].distinct() == \[1, 2, 3\] | | except | Produces the set difference of two lists | P.attr.teams.except(\["design", "engineering"\]) == \["communications", "product", "commercial"\] | | exists | Check whether at least one element matching the predicate exists in a list or map. | P.attr.teams.exists(t, t.startsWith("comm")) && P.attr.limits.exists(k, v, k == "design" && v > 0) | | exists\_one | Check that only one element matching the predicate exists. | P.attr.teams.exists\_one(t, t.startsWith("comm")) == false && P.attr.limits.exists\_one(k, v, k == "design" && v > 0) == false | | filter | Filter a list using the predicate. | size(P.attr.teams.filter(t, t.matches("^comm"))) == 2 | | flatten | Flattens a list. If an optional depth is provided, the list is flattened to the specified level | \[1,2,\[\],\[\],\[3,4\]\].flatten() == \[1, 2, 3, 4\] && \[1,\[2,\[3,\[4\]\]\]\].flatten(2) == \[1, 2, 3, \[4\]\] | | hasIntersection | Checks whether the lists have at least one common element | hasIntersection(\["design", "engineering"\], P.attr.teams) | | in | Check whether the given element is contained in the list or map | ("design" in P.attr.teams) && ("acme" in P.attr.clients) | | intersect | Produces the set intersection of two lists | intersect(\["design", "engineering"\], P.attr.teams) == \["design"\] | | isSubset | Checks whether the list is a subset of another list | \["design", "engineering"\].isSubset(P.attr.teams) == false | | lists.range | Returns a list of integers from 0 to n-1 | lists.range(5) == \[0, 1, 2, 3, 4\] | | map | Transform each element in a list | "DESIGN" in P.attr.teams.map(t, t.upperAscii()) | | reverse | Returns the elements of a list in reverse order | \[5, 3, 1, 2\].reverse() == \[2, 1, 3, 5\] | | size | Number of elements in a list or map | size(P.attr.teams) == 4 && size(P.attr.clients) == 2 | | slice 
| Returns a new sub-list using the indexes provided | \[1,2,3,4\].slice(1, 3) == \[2, 3\] | | sort | Sorts a list with comparable elements | \[3, 2, 1\].sort() == \[1, 2, 3\] | | sortBy | Sorts a list by a key value, i.e., the order is determined by the result of an expression applied to each element of the list | \[{ "name": "foo", "score": 0 },{ "name": "bar", "score": -10 },{ "name": "baz", "score": 1000 }\].sortBy(e, e.score).map(e, e.name) == \["bar", "foo", "baz"\] | | transformList | Converts a map or a list into a list value. The output expression determines the contents of the output list. Elements in the list may optionally be filtered | \[1, 2, 3\].transformList(i, v, i > 0, 2 \* v) == \[4, 6\] &&\[1, 2, 3\].transformList(i, v, 2 \* v) == \[2, 4, 6\] | | transformMap | Converts a map or a list into a map value. The key remains unchanged and only the value is changed. | \[1, 2, 3\].transformMap(i, v, i > 0, 2 \* v) == {1: 4, 2: 6} | | transformMapEntry | Converts a map or a list into a map value; however, this transform expects the entry expression be a map literal. 
Elements in the map may optionally be filtered | {'greeting': 'hello'}.transformMapEntry(k, v, {v: k}) == {'hello': 'greeting'} | ## [](#%5Fmath)Math | Function | Description | Example | | ------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ | | math.abs | Returns the absolute value of the numeric type provided as input | math.abs(1.2) == 1.2 && math.abs(-2) == 2 | | math.bitAnd | Performs a bitwise-AND operation over two int or uint values | math.bitAnd(3u, 2u) == 2u && math.bitAnd(3, 5) == 1 && math.bitAnd(-3, -5) == -7 | | math.bitNot | Function which accepts a single int or uint and performs a bitwise-NOT ones-complement of the given binary value | math.bitNot(1) == -1 && math.bitNot(-1) == 0 && math.bitNot(0u) == 18446744073709551615u | | math.bitOr | Performs a bitwise-OR operation over two int or uint values | math.bitOr(1u, 2u) == 3u && math.bitOr(-2, -4) == -2 | | math.bitShiftLeft | Perform a left shift of bits on the first parameter, by the amount of bits specified in the second parameter. The first parameter is either a uint or an int. The second parameter must be an int | math.bitShiftLeft(1, 2) == 4 && math.bitShiftLeft(-1, 2) == -4 && math.bitShiftLeft(1u, 2) == 4u && math.bitShiftLeft(1u, 200) == 0u | | math.bitShiftRight | Perform a right shift of bits on the first parameter, by the amount of bits specified in the second parameter. The first parameter is either a uint or an int. 
The second parameter must be an int | math.bitShiftRight(1024, 2) == 256 && math.bitShiftRight(1024u, 2) == 256u && math.bitShiftRight(1024u, 64) == 0u | | math.bitXor | Performs a bitwise-XOR operation over two int or uint values | math.bitXor(3u, 5u) == 6u && math.bitXor(1, 3) == 2 | | math.ceil | Compute the ceiling of a double value | math.ceil(1.2) == 2.0 && math.ceil(-1.2) == -1.0 | | math.floor | Compute the floor of a double value | math.floor(1.2) == 1.0 && math.floor(-1.2) == -2.0 | | math.greatest | Get the greatest valued number present in the arguments | math.greatest(\[1, 3, 5\]) == 5 && math.greatest(1, 3, 5) == 5 | | math.isFinite | Returns true if the value is a finite number | !math.isFinite(0.0/0.0) && math.isFinite(1.2) | | math.isInf | Returns true if the input double value is -Inf or +Inf | math.isInf(1.0/0.0) && !math.isInf(1.2) | | math.isNaN | Returns true if the input double value is NaN, false otherwise | math.isNaN(0.0/0.0) && !math.isNaN(1.2) | | math.least | Get the least valued number present in the arguments | math.least(\[1, 3, 5\]) == 1 && math.least(1, 3, 5) == 1 | | math.round | Rounds the double value to the nearest whole number with ties rounding away from zero, e.g. 1.5 → 2.0, -1.5 → -2.0 | math.round(1.2) == 1.0 && math.round(1.5) == 2.0 && math.round(-1.5) == -2.0 | | math.sign | Returns the sign of the numeric type, either -1, 0, 1 | math.sign(1.2) == 1.0 && math.sign(-2) == -1 && math.sign(0) == 0 | | math.trunc | Truncates the fractional portion of the double value | math.trunc(1.2) == 1.0 && math.trunc(-1.2) == -1.0 | ## [](#spiffe)SPIFFE | | The SPIFFE functions are Cerbos-specific extensions to CEL. | | ----- | Test data ```json ... "principal": { "id": "spiffe://cerbos.dev/ns/privileged/sa/curl", "roles": ["api"], } ...
``` | Function | Description | Example | | ---------------------- | -------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------- | | spiffeID.isMemberOf | Check whether the ID belongs to given trust domain | spiffeID(P.id).isMemberOf(spiffeTrustDomain("spiffe://cerbos.dev")) | | spiffeID.path | Get the path element of ID | spiffeID(P.id).path() == "/ns/privileged/sa/curl" | | spiffeID.trustDomain | Get the trust domain of ID | spiffeID(P.id).trustDomain() == spiffeTrustDomain("spiffe://cerbos.dev") | | spiffeMatchAny | Match any SPIFFE ID | spiffeMatchAny().matchesID(spiffeID(P.id)) == true | | spiffeMatchExact | Match a single SPIFFE ID | spiffeMatchExact(spiffeID("spiffe://cerbos.dev/ns/privileged/sa/curl")).matchesID(spiffeID(P.id)) == true | | spiffeMatchOneOf | Match any one of SPIFFE IDs | spiffeMatchOneOf(\["spiffe://cerbos.dev/ns/privileged/sa/curl", "spiffe://cerbos.dev/ns/privileged/sa/foo"\]).matchesID(spiffeID(P.id)) == true | | spiffeMatchTrustDomain | Match any ID from the trust domain | spiffeMatchTrustDomain(spiffeTrustDomain("spiffe://cerbos.dev")).matchesID(spiffeID(P.id)) == true | | spiffeTrustDomain.id | Fully qualified trust domain ID | spiffeTrustDomain("cerbos.dev").id() == "spiffe://cerbos.dev" | | spiffeTrustDomain.name | Name of trust domain | spiffeTrustDomain("spiffe://cerbos.dev").name() == "cerbos.dev" | ## [](#%5Fstrings)Strings Test data ```json ... "resource": { "kind": "leave_request", "attr": { "id": "125", "department": "marketing" } } ... 
``` | Function | Description | Example | | ------------- | ----------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------- | | base64.encode | Encode as base64 | base64.encode(bytes("hello")) == "aGVsbG8=" | | base64.decode | Decode base64 | base64.decode("aGVsbG8=") == bytes("hello") | | charAt | Get the character at given index | R.attr.department.charAt(1) == 'a' | | contains | Check whether a string contains the given substring | R.attr.department.contains("arket") | | endsWith | Check whether a string has the given suffix | R.attr.department.endsWith("ing") | | format | Format a string with the given arguments | "department\_%s\_%d".format(\["marketing", 1\]) | | indexOf | Index of the first occurrence of the given character | R.attr.department.indexOf('a') == 1 | | lastIndexOf | Index of the last occurrence of the given character | R.attr.department.lastIndexOf('g') == 8 | | lowerAscii | Convert ASCII characters to lowercase | "MARKETING".lowerAscii() == R.attr.department | | matches | Check whether a string matches a [RE2](https://github.com/google/re2/wiki/Syntax) regular expression | R.attr.department.matches("^\[mM\].\*g$") | | replace | Replace all occurrences of a substring | R.attr.department.replace("market", "engineer") == "engineering" | | replace | Replace with limits. Limit 0 replaces nothing, -1 replaces all. | "engineering".replace("e", "a", 1) == "angineering" && "engineering".replace("e", "a", -1) == "anginaaring" | | size | Get the length of the string | size(R.attr.department) == 9 | | split | Split a string using a delimiter | "a,b,c,d".split(",")\[1\] == "b" | | split | Split a string with limits. Limit 0 returns an empty list, 1 returns a list containing the original string. 
| "a,b,c,d".split(",", 2)\[1\] == "b,c,d" | | startsWith | Check whether a string has the given prefix | R.attr.department.startsWith("mark") | | substring | Selects a substring from the string | R.attr.department.substring(4) == "eting" && R.attr.department.substring(4, 6) == "et" | | trim | Remove whitespace from beginning and end | " marketing ".trim() == "marketing" | | upperAscii | Convert ASCII characters to uppercase | R.attr.department.upperAscii() == "MARKETING" | ## [](#%5Ftimestamps)Timestamps | | All timestamp getters (getHours, getMinutes, getDayOfWeek, and similar) take a time zone parameter. If omitted, the 'UTC' time zone is used by default. | | ---------------------------------------------------------------------------------------------------------------------------------------------------------- | Test data ```json ... "resource": { "kind": "leave_request", "attr": { "lastAccessed": "2021-04-20T10:00:20.021-05:00", "lastUpdateTime": "2021-05-01T13:34:12.024Z", } } ... ``` | Function | Description | Example | | --------------- | ------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------- | | timestamp | Convert an RFC3339 formatted string to a timestamp | timestamp(R.attr.lastAccessed).getFullYear() == 2021 | | getDate | Get day of month from a timestamp | timestamp(R.attr.lastAccessed).getDate() == 20 | | getDayOfMonth | Get day of month from a timestamp. Returns a zero-based value | timestamp(R.attr.lastAccessed).getDayOfMonth() == 19 | | getDayOfWeek | Get day of week from a timestamp. Returns a zero-based value where Sunday is 0 | timestamp(R.attr.lastAccessed).getDayOfWeek() == 2 | | getDayOfYear | Get day of year from a timestamp. 
Returns a zero-based value | timestamp(R.attr.lastAccessed).getDayOfYear() == 109 | | getFullYear | Get full year from a timestamp | timestamp(R.attr.lastAccessed).getFullYear() == 2021 | | getHours | Get hours from a timestamp | timestamp(R.attr.lastAccessed).getHours("-05:00") == 10 | | getMilliseconds | Get milliseconds from a timestamp | timestamp(R.attr.lastAccessed).getMilliseconds() == 21 | | getMinutes | Get minutes from a timestamp | timestamp(R.attr.lastAccessed).getMinutes("UTC") == 0 | | getMonth | Get month from a timestamp. Returns a zero-based value where January is 0 | timestamp(R.attr.lastAccessed).getMonth("NZ") == 3 | | getSeconds | Get seconds from a timestamp | timestamp(R.attr.lastAccessed).getSeconds() == 20 | | now | Current time on the server. This is a Cerbos extension to CEL | now() > timestamp(R.attr.lastAccessed) | | timeSince | Time elapsed since the given timestamp to current time on the server. This is a Cerbos extension to CEL | timestamp(R.attr.lastAccessed).timeSince() > duration("1h") | Example: Assert that more than 36 hours have elapsed between the last access time and the last update time ```yaml timestamp(R.attr.lastUpdateTime) - timestamp(R.attr.lastAccessed) > duration("36h") ``` Example: Add a duration to a timestamp ```yaml timestamp(R.attr.lastUpdateTime) + duration("24h") == timestamp("2021-05-02T13:34:12.024Z") ``` Derived roles ==================== Traditional RBAC roles are usually broad groupings with no context awareness. They are static and they are provided by the Identity Provider (IdP), not by Cerbos. Cerbos provides derived roles as a way of augmenting those broad roles with contextual data to provide more fine-grained control at runtime. For example, a person with the broad `manager` role can be augmented to `manager_of_scranton_branch` by taking into account the geographic location (or another factor) and giving that derived role bearer extra privileges on resources that belong to the Scranton branch.
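The Scranton example can be written as a derived role definition. The following is a minimal sketch; the `branch` attribute and the `manager` parent role are illustrative assumptions, not part of any standard schema: ```yaml --- apiVersion: "api.cerbos.dev/v1" derivedRoles: name: branch_roles definitions: # Hypothetical derived role: activates only when the principal's # branch attribute (an assumption for this sketch) is "scranton". - name: manager_of_scranton_branch parentRoles: ["manager"] condition: match: expr: request.principal.attr.branch == "scranton" ``` A principal holding the IdP `manager` role whose `branch` attribute is `scranton` is then treated as `manager_of_scranton_branch` during policy evaluation.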
| | Derived roles are dynamically determined at runtime by matching the principal’s roles sent in the [API request](../api/index.html#check-resources) to the parentRoles specified in the derived roles definitions. Don’t use the derived role names as roles in the API request as Cerbos only expects that field to contain "normal" roles. | | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ```yaml --- apiVersion: "api.cerbos.dev/v1" description: |- Common dynamic roles used within the Apatr app derivedRoles: name: apatr_common_roles (1) constants: import: (2) - apatr_common_constants local: (3) corporate_network_ip_range: 10.20.0.0/16 variables: import: (4) - apatr_common_variables local: (5) flagged_resource: request.resource.attr.flagged definitions: - name: owner (6) parentRoles: ["user"] (7) condition: (8) match: expr: request.resource.attr.owner == request.principal.id - name: abuse_moderator parentRoles: ["moderator"] condition: match: expr: variables.flagged_resource - name: corporate_user parentRoles: ["user"] condition: match: expr: request.principal.attr.ip_address.inIPAddrRange(constants.corporate_network_ip_range) ``` | **1** | Name to use when importing this set of derived roles. | | ----- | ---------------------------------------------------------------------------------------------------------------------------------------- | | **2** | [Constant definitions](variables.html#export-constants) to import (optional). | | **3** | [Local constant definitions](variables.html#local-constants) (optional). | | **4** | [Variable definitions](variables.html#export) to import (optional). | | **5** | [Local variable definitions](variables.html#local) (optional). 
| | **6** | Descriptive name for this derived role. | | **7** | The static roles (from the identity provider) to which this derived role applies. The special value \* can be used to match any role. | | **8** | An (optional) set of expressions that should evaluate to true for this role to activate. | Understanding derived roles To explain the concept of derived roles, consider this example from the DC Comics universe: when billionaire playboy Bruce Wayne wears the bat costume he becomes Batman, the caped crusader. Becoming Batman gives Bruce extra privileges like being able to beat up criminals without any consequences and driving a tank through the streets of Gotham. In Cerbos terms, Batman is the `derived role` and Bruce Wayne is the `parentRole`. The `condition` for activating the Batman derived role is: `Bruce Wayne is wearing the bat costume`. Cerbos only ever deals with Bruce Wayne because he’s the only real person in this scenario. However, Cerbos is smart enough to treat him as Batman whenever he’s wearing his costume. ```yaml --- apiVersion: "api.cerbos.dev/v1" derivedRoles: name: gotham_city definitions: - name: batman parentRoles: ["bruce_wayne"] condition: match: expr: P.attr.isWearingBatCostume ``` Cerbos policies ==================== There are six kinds of Cerbos policies: [Derived roles](derived%5Froles.html) Traditional RBAC roles are usually broad groupings with no context awareness. Derived roles are a way of augmenting those broad roles with contextual data to provide more fine-grained control at runtime. For example, a person with the broad `manager` role can be augmented to `manager_of_scranton_branch` by taking into account the geographic location (or another factor) and giving that derived role bearer extra privileges on resources that belong to the Scranton branch. [Resource policies](resource%5Fpolicies.html) Defines rules for actions that can be performed on a given resource.
A resource is an application-specific concept that applies to anything that requires access rules. For example, in an HR application, a resource can be as coarse-grained as a full employee record or as fine-grained as a single field in the record. [Principal policies](principal%5Fpolicies.html) Defines overrides for a specific user. [Role policies](role%5Fpolicies.html) Defines rules specific to a given role. Rules are defined as a list of allowable actions that apply to a particular resource. [Exported variables](variables.html#export) Defines variables to be reused in condition expressions in other policies. [Exported constants](variables.html#export-constants) Defines constants to be reused in condition expressions in other policies. Policies are evaluated based on the metadata passed in the request to the Cerbos PDP. See [Cerbos API](../api/index.html) for more information. | | View the latest documentation and example requests by accessing a running Cerbos instance using a browser (). The OpenAPI (Swagger) schema can be obtained by accessing /schema/swagger.json as well. | | ----- | Outputs ==================== You can define an optional expression to be evaluated when a policy rule is fully activated (`action`, `roles` and `derivedRoles` match and `condition` is satisfied) or partially activated (`condition` is not satisfied). The collected outputs from all the rules are included in the Cerbos API response. Output expressions are useful if you want to take specific actions in your application based on the triggered rules. For example, if your policy contains a rule that denies access if the request is issued outside working hours, it could output a string that explains the restriction.
Your application could then display that back to the user so that they know the specific reason why the request was denied. Consider the following policy definition: ```yaml --- apiVersion: api.cerbos.dev/v1 resourcePolicy: version: "default" resource: "system_access" rules: - name: working-hours-only actions: ['*'] effect: EFFECT_DENY roles: ['*'] condition: match: expr: now().getHours() > 18 || now().getHours() < 8 output: when: ruleActivated: |- {"principal": P.id, "resource": R.id, "timestamp": now(), "message": "System can only be accessed between 0800 and 1800"} conditionNotMet: |- {"principal": P.id, "resource": R.id, "timestamp": now(), "message": "System can be accessed at this time"} ``` If a request is made outside working hours, the response from Cerbos would resemble the following: ```json { "requestId": "xx-010023-23459", "results": [ { "resource": { "id": "bastion_002", "kind": "system_access" }, "actions": { "login": "EFFECT_DENY" }, "meta": { "actions": { "login": { "matchedPolicy": "resource.system_access.vdefault" } } }, "outputs": [ { "src": "resource.system_access.vdefault#working-hours-only", "val": { "message": "System can only be accessed between 0800 and 1800", "principal": "john", "resource": "bastion_002", "timestamp": "2023-06-02T21:53:58.319506543+01:00" } } ] } ] } ``` If a request is made inside working hours, the response would resemble the following: ```json { "requestId": "xx-010023-23459", "results": [ { "resource": { "id": "bastion_002", "kind": "system_access" }, "actions": { "login": "EFFECT_ALLOW" }, "meta": { "actions": { "login": { "matchedPolicy": "resource.system_access.vdefault" } } }, "outputs": [ { "src": "resource.system_access.vdefault#working-hours-only", "val": { "message": "System can be accessed at this time", "principal": "john", "resource": "bastion_002", "timestamp": "2023-06-02T21:53:58.319506543+01:00" } } ] } ] } ``` | | Depending on the evaluation result of the expression(s) under the condition.match, the 
result of the expression output.when.ruleActivated or output.when.conditionNotMet will be rendered in the output. | | ----- | Output expressions can be any valid CEL expression. You can return simple values such as strings, numbers and booleans or complex values such as maps and lists. | | Excessive use of output expressions could affect policy evaluation performance. If you use them for debugging purposes, remember to remove them before going to production. | | ----- | Principal policies ==================== Principal policies define overrides for a specific user. ```yaml --- apiVersion: "api.cerbos.dev/v1" principalPolicy: principal: daffy_duck (1) version: "dev" (2) scope: "acme.corp" (3) scopePermissions: SCOPE_PERMISSIONS_REQUIRE_PARENTAL_CONSENT_FOR_ALLOWS (4) constants: import: (5) - apatr_common_constants local: (6) test_department_id: 12345 variables: import: (7) - apatr_common_variables local: (8) is_dev_record: |- request.resource.attr.dev_record == true || request.resource.attr.department_id == constants.test_department_id rules: - resource: leave_request (9) actions: - name: dev_record_wildcard (10) action: "*" (11) condition: (12) match: expr: variables.is_dev_record effect: EFFECT_ALLOW output: (13) when: ruleActivated: |- "wildcard_override:%s".format([request.principal.id]) conditionNotMet: |- "wildcard_condition_not_met:%s".format([request.principal.id]) - resource: employee_profile actions: - name: view_employee_profile action: "*" condition: match: all: of: - expr: variables.is_dev_record - expr: request.resource.attr.public == true effect: EFFECT_ALLOW - resource: salary_record actions:
- action: "*" effect: EFFECT_DENY ``` | **1** | Principal to whom this policy applies. | | ------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | **2** | Version of this policy. Policies are uniquely identified by the principal name and version pair. You can have multiple policy versions for the same principal (e.g. production vs. staging). The version value default is special as it is the default fallback when no version is specified in the request. | | **3** | Optional [scope](scoped%5Fpolicies.html) for this policy. | | **4** | Optional [scope permission](scope%5Fpermissions.html) for this policy, defaults to SCOPE\_PERMISSIONS\_OVERRIDE\_PARENT. | | **5** | [Constant definitions](variables.html#export-constants) to import (optional). | | **6** | [Local constant definitions](variables.html#local-constants) (optional). | | **7** | [Variable definitions](variables.html#export) to import (optional). | | **8** | [Local variable definitions](variables.html#local) (optional). | | **9** | Resource to which this override applies. Wildcards are supported here. | | **10** | Optional name for the rule. | | **11** | Actions that can be performed on the resource. Wildcards are supported here. | | **12** | Optional conditions required to match this rule. | | **13** | Optional output for the action rule. You can define optional expressions to be evaluated as output depending on whether the rule is activated or not activated because of a condition failure. | Resource policies ==================== Resource policies define rules for actions that can be performed on a given resource. A resource is an application-specific concept that applies to anything that requires access rules. 
For example, in an HR application, a resource can be as coarse-grained as a full employee record or as fine-grained as a single field in the record. Multiple rules can be defined for the same action on a resource for different roles and/or with different conditions. If more than one rule matches a given input, a rule specifying `EFFECT_DENY` takes precedence over one specifying `EFFECT_ALLOW`.

```yaml
---
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  resource: "album:object" # (1)
  version: "default" # (2)
  scope: "acme.corp" # (3)
  scopePermissions: SCOPE_PERMISSIONS_REQUIRE_PARENTAL_CONSENT_FOR_ALLOWS # (4)
  importDerivedRoles:
    - apatr_common_roles # (5)
  constants:
    import: # (6)
      - apatr_common_constants
    local: # (7)
      corporate_network_ip_range: 10.20.0.0/16
  variables:
    import: # (8)
      - apatr_common_variables
    local: # (9)
      is_corporate_network: |-
        request.principal.attr.ip_address.inIPAddrRange(constants.corporate_network_ip_range)
  rules:
    - actions: ['*'] # (10)
      effect: EFFECT_ALLOW
      derivedRoles:
        - owner # (11)

    - actions: ['view']
      effect: EFFECT_ALLOW
      roles:
        - user # (12)
      condition:
        match:
          expr: request.resource.attr.public == true
      output: # (13)
        when:
          ruleActivated: |-
            "view_allowed:%s".format([request.principal.id])
          conditionNotMet: |-
            "view_not_allowed:%s".format([request.principal.id])

    - name: moderator_rule # (14)
      actions: ['view', 'delete']
      effect: EFFECT_ALLOW
      condition:
        match:
          expr: variables.is_corporate_network
      derivedRoles:
        - abuse_moderator

  schemas: # (15)
    principalSchema:
      ref: cerbos:///principal.json # (16)
    resourceSchema:
      ref: cerbos:///album/object.json # (17)
```

| **1** | Kind of resource to which this policy applies. |
| --- | --- |
| **2** | Version of this policy. Policies are uniquely identified by the resource name and version pair. You can have multiple policy versions for the same resource (e.g. production vs. staging). The version value `default` is special as it is the default fallback when no version is specified in the request. |
| **3** | Optional [scope](scoped%5Fpolicies.html) for this policy. |
| **4** | Optional [scope permission](scope%5Fpermissions.html) for this policy. Defaults to `SCOPE_PERMISSIONS_OVERRIDE_PARENT`. |
| **5** | Import a set of [derived roles](derived%5Froles.html) (optional). |
| **6** | [Constant definitions](variables.html#export-constants) to import (optional). |
| **7** | [Local constant definitions](variables.html#local-constants) (optional). |
| **8** | [Variable definitions](variables.html#export) to import (optional). |
| **9** | [Local variable definitions](variables.html#local) (optional). |
| **10** | Actions can contain wildcards. Wildcards honour the `:` delimiter. E.g. `a:*:d` would match `a:x:d` but not `a:x`. |
| **11** | This rule applies to a derived role. |
| **12** | Rules can also refer directly to static roles. The special value `*` can be used to disregard roles when evaluating the rule. |
| **13** | Optional output for the action rule. You can define optional expressions to be evaluated as output depending on whether the rule is activated or not activated because of a condition failure. |
| **14** | Optional name for the rule. |
| **15** | Optional section for defining schemas that apply to this resource kind. |
| **16** | Optional schema for validating the principal attributes. |
| **17** | Optional schema for validating the resource attributes. |

Role policies
====================

Role policies are ABAC policies in which you specify a number of resources, each with a set of allowable actions that the role can carry out on the resource. Optionally, a condition can also be specified for each set of allowable actions.
In the simple case, they allow you to author permissions from the view of an IdP role, rather than for a given resource.

Unlike resource and principal policies, role policies do not define explicit `ALLOW` or `DENY` effects. Instead, the **allowable actions** act as an exhaustive list of actions allowed on each resource. Any resource and action pair not defined in this list is immediately denied for that role.

The name of a role policy is effectively a custom role within the context of Cerbos. A role policy (custom role) can optionally define `parentRoles`, inheriting and narrowing their permissions by default. The policy can only define rules that are a strict subset of the parent roles' permissions and cannot introduce any extra rules beyond what the parent roles allow. It can immediately DENY an action, but if it ALLOWs an action, a parent policy higher up the scope chain must also ALLOW the same action. A parent role can be either an arbitrary IdP role or the name of another role policy within the system. Parent role resolution is recursive: if a custom role inherits from another custom role that also has parent roles, it inherits and narrows their permissions as well.

```yaml
---
apiVersion: api.cerbos.dev/v1
rolePolicy:
  role: "acme_admin" # (1)
  scope: "acme.hr.uk" # (2)
  parentRoles: # (3)
    - admin
  rules:
    - resource: leave_request # (4)
      allowActions: # (5)
        - "view:*" # (6)
        - deny
    - resource: salary_record
      allowActions:
        - edit
      condition: # (7)
        match:
          expr: R.attr.owner == P.id
    - resource: "*" # (8)
      allowActions: ["create"]
```

| **1** | The role to which this policy applies. |
| --- | --- |
| **2** | Optional principal [scope](scoped%5Fpolicies.html) for this policy. |
| **3** | The list of parent roles that the custom role inherits. |
| **4** | The resource to which the following rule applies. |
| **5** | The list of allowable actions that the role can carry out on the given resource. |
| **6** | Wildcard actions are supported. |
| **7** | A condition that must be met for the action to be allowed. |
| **8** | Wildcard resources are also supported. |

Schemas
====================

Cerbos policies rely on context data about the principal and the resource(s) that are submitted through the `attr` fields of the [API request](../api/index.html). While the free-form nature of these fields gives you maximum flexibility to author policies that work on data of any shape or form, the data can become difficult to reason about and system-wide standards and conventions become harder to enforce. Using the [JSON Schema](http://json-schema.org) support built into Cerbos, you can define schemas for all your principal and resource attributes on a per-resource basis by specifying them in the resource policy. The Cerbos PDP will validate incoming requests and either log warnings or reject them outright, based on the schema enforcement configuration in effect.

## [](#%5Fdefine%5Fschemas)Define schemas

Cerbos schemas are standard [JSON Schemas](http://json-schema.org/specification.html) (draft 2020-12). If you are using any of the `disk`, `git` or `blob` [storage drivers](../configuration/storage.html), the schemas are expected to be in a special directory named `_schemas`, located at the root of the storage directory or bucket. Use the [Admin API](../api/admin%5Fapi.html) to add or update schemas if you are using one of the database drivers.

To avoid repetition, you can define common schema fragments inline using `$defs` or refer to other schemas using `$ref`. When using `$ref` to refer to another schema stored in Cerbos storage, make sure to use an absolute URL with `cerbos` as the scheme. For example, use `cerbos:///common/address.json` to refer to a schema file stored in `_schemas/common/address.json` (if using one of the disk-based stores). This ensures that policies remain portable between different environments.
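To make that mapping concrete, here is a small illustrative sketch (a hypothetical helper, not part of Cerbos itself) showing how a `cerbos:///` reference corresponds to a file path under the `_schemas` directory:

```python
from urllib.parse import urlparse


def resolve_schema_ref(ref: str, schema_root: str = "_schemas") -> str:
    """Map a cerbos:/// schema reference to its path in a disk-based policy store.

    Illustrative only: Cerbos performs this resolution internally when loading
    schemas, so you never need to call anything like this yourself.
    """
    parsed = urlparse(ref)
    if parsed.scheme != "cerbos":
        raise ValueError(f"expected a cerbos:/// URL, got {ref!r}")
    # The path component (e.g. /common/address.json) is relative to _schemas.
    return f"{schema_root}/{parsed.path.lstrip('/')}"


print(resolve_schema_ref("cerbos:///common/address.json"))
# _schemas/common/address.json
```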
customer.json: a schema that references another schema to avoid repetition

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "properties": {
    "first_name": { "type": "string" },
    "last_name": { "type": "string" },
    "shipping_address": { "$ref": "cerbos:///address.json" },
    "billing_address": { "$ref": "cerbos:///address.json" }
  },
  "required": ["first_name", "last_name", "shipping_address", "billing_address"]
}
```

address.json: the schema referenced by customer.json

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "properties": {
    "street_address": { "type": "string" },
    "city": { "type": "string" },
    "state": { "type": "string" }
  },
  "required": ["street_address", "city", "state"]
}
```

## [](#%5Fvalidate%5Frequests%5Fusing%5Fschemas)Validate requests using schemas

First, update your resource policy to point to the schemas that should be used to validate requests for that resource kind. For example, the following resource policy requires all requests for the `album:object` resource kind to be validated using `principal.json` for the principal attributes and `album/object.json` for the resource attributes.

Example

```yaml
---
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  importDerivedRoles:
    - apatr_common_roles
  resource: "album:object"
  rules:
    - actions: ['create']
      effect: EFFECT_ALLOW
      derivedRoles:
        - owner
    - actions: ['view']
      effect: EFFECT_ALLOW
      roles:
        - user
      condition:
        match:
          expr: request.resource.attr.public == true
  schemas: # (1)
    principalSchema: # (2)
      ref: cerbos:///principal.json
    resourceSchema: # (3)
      ref: cerbos:///album/object.json
      ignoreWhen: # (4)
        actions: ['create', 'delete:*']
```

| **1** | Schema definition block. Optional. Leave this out if you do not want to use schema validation for this resource type. |
| --- | --- |
| **2** | Schema for validating the principal attributes. Optional. Leave this out if you do not want to validate the principal. |
| **3** | Schema for validating the resource attributes. Optional. Leave this out if you do not want to validate the resource. |
| **4** | Ignore block. Optional. Defines the actions for which schema validation should be ignored. This is useful for special cases like create, where the resource might not yet have all the attributes required to pass schema validation. |

Finally, [configure the schema enforcement level](../configuration/schema.html) of the Cerbos PDP to either `warn` or `reject` and restart it. Now the PDP will validate any requests where the matching resource policy has schemas specified.

| | If the enforcement level is `reject` and the request is invalid according to the schema, the effect for all actions will be set to `EFFECT_DENY`. If the enforcement level is `warn`, Cerbos will still evaluate the policies and return the effects determined by the policy. |
| --- | --- |

Example: CheckResourceSet API response containing validation errors

```json
{
  "requestId": "test",
  "resourceInstances": {
    "XX125": {
      "actions": {
        "approve": "EFFECT_DENY",
        "create": "EFFECT_DENY",
        "defer": "EFFECT_ALLOW",
        "view:public": "EFFECT_ALLOW"
      },
      "validationErrors": [
        {
          "path": "/department",
          "message": "value must be one of \"marketing\", \"engineering\"",
          "source": "SOURCE_PRINCIPAL"
        },
        {
          "path": "/department",
          "message": "value must be one of \"marketing\", \"engineering\"",
          "source": "SOURCE_RESOURCE"
        }
      ]
    }
  }
}
```

Scope Permissions
====================

`scopePermissions` is a setting applied to resource and principal policies that affects how rules are evaluated within a scope hierarchy. It defines whether policies in a given scope can **override** parent scope rules or whether they can only **restrict** the permissions granted by parent scopes.

All resource or principal policies within the same scope **must** use the same `scopePermissions` setting. If conflicting settings are detected within a shared scope, a build-time error will occur.

There are two available settings:

* `SCOPE_PERMISSIONS_OVERRIDE_PARENT`
* `SCOPE_PERMISSIONS_REQUIRE_PARENTAL_CONSENT_FOR_ALLOWS`

| | By default, resource and principal policies use `SCOPE_PERMISSIONS_OVERRIDE_PARENT` unless explicitly set otherwise. |
| --- | --- |

### [](#%5Fscope%5Fpermissions%5Foverride%5Fparent)SCOPE\_PERMISSIONS\_OVERRIDE\_PARENT

This is the default evaluation strategy for scoped policies. Cerbos starts evaluating policies from the bottom of the scope chain and moves up.
The first policy to produce a decision for a given action is the winner. Any policies further up the chain cannot influence that decision.

* If an input matches a rule and its condition is met, the specified effect is applied (there is no need to check parent scopes).
* If a rule is matched but its condition is not met, or if no rule is matched, evaluation continues up the hierarchy.

### [](#%5Fscope%5Fpermissions%5Frequire%5Fparental%5Fconsent%5Ffor%5Fallows)SCOPE\_PERMISSIONS\_REQUIRE\_PARENTAL\_CONSENT\_FOR\_ALLOWS

When a policy is configured with `scopePermissions: SCOPE_PERMISSIONS_REQUIRE_PARENTAL_CONSENT_FOR_ALLOWS`, it **inherits and restricts** the permissions of parent scopes. Policies at this level must define rules within the maximum set of permissions allowed by parent policies; they cannot introduce new permissions that exceed what a parent scope already permits.

In this mode, an `ALLOW` rule that matches an action does not immediately produce an `ALLOW` decision. A parent policy higher up in the scope chain must also `ALLOW` that same action in order to produce a definitive decision. However, if a rule is matched but its condition is not met, the request is implicitly denied.

* If no rule matches the input, evaluation continues up the scope hierarchy.
* If a rule is matched but its condition is not met, an implicit DENY is issued.
* If a rule matches and the condition is met, evaluation continues to parent policies to verify that the action is also allowed at a higher level.

Scoped policies
====================

| | Scoped policies are optional and are only evaluated if a scope is passed in the request and there are matching `scope` attributes defined in the policies. |
| --- | --- |

| | Resource and principal policies can define `scopePermissions`, which affects how rules are applied across scopes. See the [scope permissions documentation](scope%5Fpermissions.html) for more details. |
| --- | --- |

Scoped policies offer a way to model hierarchical relationships that regularly occur in many situations. Typically, the requirement is to have a base set of policies that can then be overridden for specific cases. For example, a multi-tenant SaaS system could have a standard set of access rules that can then be customised to suit the requirements of different tenants. Another example is a large organisation that might want to have regional or departmental customisations to its global access rules.

![hierarchy](_images/hierarchy.png)

Cerbos resource and principal policies have an optional `scope` field that can be used to indicate that they are part of a set of policies that must be evaluated together. Additionally, resource and principal policies within the same scope must use the same `scopePermissions` setting to define how rules interact across scope levels.

```yaml
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  scope: "acme.corp" # (1)
  scopePermissions: SCOPE_PERMISSIONS_OVERRIDE_PARENT # (2)
  resource: "album:object"
  rules:
    - actions: ['*']
      effect: EFFECT_ALLOW
      roles: ["admin"]
```

| **1** | Scope definition |
| --- | --- |
| **2** | Scope permissions setting |

The value of `scope` is a dot-separated string where each dotted segment defines an ancestor. During policy evaluation, the Cerbos engine starts with the most specific scoped policy and moves up the hierarchy.

NOTE: The value of the `scopePermissions` field affects the policy evaluation behaviour. See [scope permissions](scope%5Fpermissions.html) for more information.

For example, consider a policy with the scope `a.b.c`.
The Cerbos engine could process up to four policies to arrive at the final decision:

* scope `a.b.c`
* scope `a.b`
* scope `a`
* no scope (the empty scope)

To illustrate, consider the following Check request:

```json
{
  "requestId": "test01",
  "actions": ["view", "comment"],
  "resource": {
    "kind": "album:object",
    "policyVersion": "default",
    "scope": "customer.abc", (1)
    "instances": {
      "XX125": {
        "attr": {
          "owner": "alicia",
          "public": false,
          "tags": ["x", "y"]
        }
      }
    }
  },
  "principal": {
    "id": "alicia",
    "policyVersion": "default",
    "scope": "customer", (2)
    "roles": ["user"],
    "attr": {
      "geography": "GB"
    }
  }
}
```

| **1** | Optional resource scope |
| --- | --- |
| **2** | Optional principal scope |

When processing the above request, the decision flow chart for the Cerbos engine would look like the following:

![decision flow](_images/decision_flow.png)

## [](#%5Fworking%5Fwith%5Fscoped%5Fpolicies)Working with scoped policies

* The policy without any scope defined is always the base policy. It is used by default if a request does not specify any scope.
* Scope permissions must be consistent within the same scope. If conflicting `scopePermissions` settings are detected in policies within a shared scope, a build-time error will occur.
* Scope traversal behaviour depends on `scopePermissions`:
  * With `SCOPE_PERMISSIONS_OVERRIDE_PARENT`, the first policy to return a decision wins for each action.
  * With `SCOPE_PERMISSIONS_REQUIRE_PARENTAL_CONSENT_FOR_ALLOWS`, leaf nodes can only **restrict** access and must conform to parent permissions.
* There must be no gaps in the policy chain. For example, if you define a policy with scope `a.b.c`, then policies with scopes `a.b`, `a`, and no scope must also exist in the policy repository.
* [Schemas](schemas.html) must be the same among all the policies in the chain. The schemas used to validate the request are taken from the base policy (the policy without a scope). Schemas defined in other policies of the chain will be ignored.
* First match wins (when using `SCOPE_PERMISSIONS_OVERRIDE_PARENT`): scoped policies are evaluated from the most specific to the least specific. The first policy to produce a decision (ALLOW/DENY) for an action is the winner. The remaining policies cannot override the decision for that particular action.
* Parent constraints apply (when using `SCOPE_PERMISSIONS_REQUIRE_PARENTAL_CONSENT_FOR_ALLOWS`): the most specific policies can only **restrict permissions** further, not grant new ones.
* **Explicit imports for derived roles and variables**: variable and derived role imports are not inherited between policies. Explicitly import any derived roles and re-define any variables in each policy that requires them.
* Unless [lenient scope search](../configuration/engine.html#lenient%5Fscopes) is enabled, a policy file matching the exact scope requested in the API request must exist in the store.

Variables and constants
====================

## [](#variables)Variables

You can use variables to reduce duplication in [policy condition expressions](conditions.html). Variables may either be defined locally within a policy, or in a standalone `exportVariables` policy file that can be imported by other policies.

### [](#local)Defining local variables

Local variables are only accessible from the policy in which they are defined. In particular, local variables defined for derived roles can't be used in resource policies that import the derived roles.

```yaml
---
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  variables:
    local: # (1)
      flagged_resource: request.resource.attr.flagged
      label: '"latest"' # assigning a string literal
      teams: '["red", "blue"]' # assigning an array literal
      lookup: '{"red": 9001, "blue": 0}' # assigning a map literal
  # ...
```

| **1** | Map of variable name to expression. |
| --- | --- |

### [](#export)Defining and importing exported variables

To reuse variables between policies, they can be exported from a separate file.

```yaml
---
apiVersion: api.cerbos.dev/v1
description: Common variables used within the Apatr app
exportVariables:
  name: apatr_common_variables # (1)
  definitions: # (2)
    flagged_resource: request.resource.attr.flagged
    label: '"latest"' # assigning a string literal
    teams: '["red", "blue"]' # assigning an array literal
    lookup: '{"red": 9001, "blue": 0}' # assigning a map literal
```

| **1** | Name to use when importing this set of variables. |
| --- | --- |
| **2** | Map of variable name to expression. |

Other policies can then import the variables by name.

```yaml
---
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  variables:
    import: # (1)
      - apatr_common_variables
  # ...
```

| **1** | List of names of variable sets to import. |
| --- | --- |

### [](#%5Fusing%5Fvariables%5Fin%5Fpolicy%5Fconditions)Using variables in policy conditions

Variables can be referenced via the `variables` (aliased to `V`) special variable in policy condition expressions.

```yaml
---
condition:
  match:
    expr: variables.flagged_resource
```

Local and imported variable definitions are merged, and each variable is evaluated before any rule condition. If a variable is defined in more than one location, the policy will fail to compile.

### [](#%5Ftop%5Flevel%5Fvariables%5Ffield)Top-level variables field

In earlier versions of Cerbos, local variables were defined in a top-level `variables` field in the policy file. This field is deprecated in favour of the `variables.local` section within the policy body. For backwards compatibility, the deprecated top-level field is merged with the `variables.local` section in derived roles, resource, and principal policies.
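The merge-and-reject behaviour described above can be sketched as follows. This is a hypothetical helper that mirrors the documented rule (local and imported definitions are merged; a name defined in more than one location fails compilation), not the actual Cerbos compiler code:

```python
def merge_variable_definitions(*definition_sets: dict) -> dict:
    """Merge variable definition maps, rejecting duplicate names.

    Mirrors the documented rule: if the same variable name appears in
    more than one location, the policy fails to compile.
    """
    merged: dict = {}
    for definitions in definition_sets:
        for name, expr in definitions.items():
            if name in merged:
                raise ValueError(
                    f"variable {name!r} is defined in more than one location"
                )
            merged[name] = expr
    return merged


imported = {"flagged_resource": "request.resource.attr.flagged"}
local = {"label": '"latest"'}
print(merge_variable_definitions(imported, local))
```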
## [](#constants)Constants

Variables are expressions that are evaluated at runtime. That makes them slightly awkward to use with literal values, because you have to quote the value to make it a valid [Common Expression Language (CEL)](https://github.com/google/cel-spec/blob/master/doc/intro.md) expression. Constants are an alternative to defining variables with literal values, allowing the values to be written using standard YAML or JSON syntax.

### [](#local-constants)Defining local constants

Local constants are only accessible from the policy in which they are defined. In particular, local constants defined for derived roles can't be used in resource policies that import the derived roles.

```yaml
---
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  constants:
    local: # (1)
      label: latest
      teams:
        - red
        - blue
      lookup:
        red: 9001
        blue: 0
  # ...
```

| **1** | Map of constant name to value. |
| --- | --- |

### [](#export-constants)Defining and importing exported constants

To reuse constants between policies, they can be exported from a separate file.

```yaml
---
apiVersion: api.cerbos.dev/v1
description: Common constants used within the Apatr app
exportConstants:
  name: apatr_common_constants # (1)
  definitions: # (2)
    label: latest
    teams:
      - red
      - blue
    lookup:
      red: 9001
      blue: 0
```

| **1** | Name to use when importing this set of constants. |
| --- | --- |
| **2** | Map of constant name to value. |

Other policies can then import the constants by name.

```yaml
---
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  constants:
    import: # (1)
      - apatr_common_constants
  # ...
```

| **1** | List of names of constant sets to import. |
| --- | --- |

### [](#%5Fusing%5Fconstants%5Fin%5Fpolicy%5Fconditions)Using constants in policy conditions

Constants can be referenced via the `constants` (aliased to `C`) special variable in policy condition expressions.
```yaml
---
condition:
  match:
    expr: constants.lookup[request.principal.attr.team] > 9000
```

Local and imported constant definitions are merged. If a constant is defined in more than one location, the policy will fail to compile.

Integrating permission checks into your user interface
====================

It's a common requirement to make permission checks in the user interface layer of your application. For example, you might want to hide the "Edit" button if the current user isn't allowed to edit the corresponding resource. You can tackle this by checking the user's permissions in the back end of your application and including the results in your API responses, by calling the Cerbos PDP directly from the browser, or by evaluating your policies in the browser.

| | Checking permissions in the user interface is not a substitute for performing checks in the back end. |
| --- | --- |

## [](#%5Fincluding%5Fpermissions%5Fin%5Fapi%5Fresponses)Including permissions in API responses

You can add a `permissions` field to relevant API responses and populate it by calling the Cerbos PDP's [CheckResources](../api/index.html#check-resources) API with multiple actions. For example, an API response from a blog application might look like this:

```json
{
  "blog_post": {
    "title": "Why are we building Cerbos?",
    "author": "Emre Baran & Charith Ellawala",
    "permissions": {
      "edit": true,
      "delete": false
    }
  }
}
```

This pattern can be readily tailored to your requirements. It's a great way to ensure that the front and back ends of your application agree on your policy rules.

## [](#%5Fcalling%5Fthe%5Fcerbos%5Fpdp%5Ffrom%5Fthe%5Fbrowser)Calling the Cerbos PDP from the browser

The Cerbos PDP API is available via REST, so you can perform permission checks directly from the browser.
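As a rough illustration, such a check boils down to a small JSON request to the PDP. The sketch below assembles a CheckResources-style payload in Python; the field names follow the shape of the Cerbos CheckResources API, while the endpoint comment, IDs and actions are illustrative assumptions:

```python
import json


def build_check_resources_payload(principal_id, roles, kind, resource_id, actions):
    """Assemble a CheckResources-style request body.

    Illustrative sketch of the payload a client would POST to the PDP's
    check-resources endpoint; consult the Cerbos API docs for the full schema.
    """
    return {
        "requestId": "ui-check",  # illustrative request ID
        "principal": {"id": principal_id, "roles": roles},
        "resources": [
            {
                "resource": {"kind": kind, "id": resource_id},
                "actions": actions,
            }
        ],
    }


payload = build_check_resources_payload(
    "alicia", ["user"], "album:object", "XX125", ["view", "edit"]
)
print(json.dumps(payload, indent=2))
```

The response maps each requested action to an allow/deny result, which the UI can use to show or hide controls.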
The [@cerbos/http JavaScript SDK](https://www.npmjs.com/package/@cerbos/http) wraps the REST API to make it easier to integrate into your application.

| | Exposing the PDP to the internet has security and performance implications. An attacker could use the API to probe your authorization policies much more easily than through your user interface. You could mitigate this to some extent by keeping the PDP behind a reverse proxy that authenticates and rate-limits API calls. You might also want to use a separate deployment with only a subset of your policies. |
| --- | --- |

## [](#%5Fevaluating%5Fpolicies%5Fin%5Fthe%5Fbrowser)Evaluating policies in the browser

You can use [Cerbos Hub's embedded PDPs](https://docs.cerbos.dev/cerbos-hub/decision-points-embedded) to evaluate your authorization policies directly in the browser. This allows you to perform permission checks on the front end without changing the back end.

Install from binary
====================

Cerbos binaries are available for multiple operating systems and architectures. See the [releases page](https://github.com/cerbos/cerbos/releases/tag/v0.45.1) for all available downloads.

| OS | Arch | Bundle |
| --- | --- | --- |
| Linux | x86-64 | `cerbos_0.45.1_Linux_x86_64.tar.gz` |
| Linux | arm64 | `cerbos_0.45.1_Linux_arm64.tar.gz` |
| MacOS | universal | `cerbos_0.45.1_Darwin_all.tar.gz` |
| MacOS | x86-64 | `cerbos_0.45.1_Darwin_x86_64.tar.gz` |
| MacOS | arm64 | `cerbos_0.45.1_Darwin_arm64.tar.gz` |

You can download the binaries by running the following command.
Substitute the final path segment of the download URL with the appropriate bundle name from the above table.

```sh
curl -L -o cerbos.tar.gz "https://github.com/cerbos/cerbos/releases/download/v0.45.1/"
tar xvf cerbos.tar.gz
chmod +x cerbos
```

| | Cerbos binaries are signed using [sigstore](https://www.sigstore.dev) tools during the automated build process and the verification bundle is published along with the binary as `.bundle`. The following example demonstrates how to verify the Linux x86-64 bundle archive. |
| --- | --- |

```sh
# Download the bundle archive
curl -L \
  -o cerbos_0.45.1_Linux_x86_64.tar.gz \
  "https://github.com/cerbos/cerbos/releases/download/v0.45.1/cerbos_0.45.1_Linux_x86_64.tar.gz"

# Download the verification bundle
curl -L \
  -o cerbos_0.45.1_Linux_x86_64.tar.gz.bundle \
  "https://github.com/cerbos/cerbos/releases/download/v0.45.1/cerbos_0.45.1_Linux_x86_64.tar.gz.bundle"

# Verify the signature
cosign verify-blob \
  --certificate-oidc-issuer="https://token.actions.githubusercontent.com" \
  --certificate-identity="https://github.com/cerbos/cerbos/.github/workflows/release.yaml@refs/tags/v0.45.1" \
  --bundle="cerbos_0.45.1_Linux_x86_64.tar.gz.bundle" \
  "cerbos_0.45.1_Linux_x86_64.tar.gz"
```

## [](#linux-packages)Linux packages

Cerbos DEB and RPM packages can be installed on any Linux distribution that supports one of those package formats. You can download the appropriate package for your system from the [releases page](https://github.com/cerbos/cerbos/releases/tag/v0.45.1).

| | Cerbos packages are currently only designed to work with systems where systemd is the init system. If you use a different init system, consider installing Cerbos from the tarballs instead. |
| --- | --- |

The packages install the `cerbos` and `cerbosctl` binaries to `/usr/local/bin` and create a systemd service to automatically start the Cerbos server.
The default configuration is set up to look for policies in `/var/cerbos/policies`, but you can change this by editing `/etc/cerbos/yaml` and reloading the service with `sudo systemctl restart cerbos`.

```sh
# Show status of the service
sudo systemctl status cerbos

# Restart the service
sudo systemctl restart cerbos

# View logs
sudo journalctl -xeu cerbos.service
```

## [](#homebrew)Homebrew

You can install Cerbos binaries using Homebrew as well.

```sh
brew tap cerbos/tap
brew install cerbos
```

## [](#npm)npm

You can install Cerbos binaries from the npm registry. This removes a separate setup step for JavaScript projects and allows you to lock Cerbos to a specific version to ensure a consistent development environment. [cerbos](https://www.npmjs.com/package/cerbos) and [cerbosctl](https://www.npmjs.com/package/cerbosctl) are available as separate packages.

```sh
npm install --save-dev cerbos cerbosctl
```

Note that the npm packages rely on platform-specific optional dependencies, so make sure you don't omit these when installing dependencies (for example, don't pass the `--no-optional` flag to `npm`).

## [](#nix)Nix flake

A [Nix flake](https://nixos.wiki/wiki/Flakes) is available at `github:cerbos/cerbos-flake`.
```none
# Launch a Cerbos server
nix run github:cerbos/cerbos-flake#cerbos -- server --set=storage.disk.directory=/path/to/policy_directory

# Launch a REPL
nix run github:cerbos/cerbos-flake#cerbos -- repl

# Launch cerbosctl
nix run github:cerbos/cerbos-flake#cerbosctl

# Start a Nix shell session with cerbos and cerbosctl installed
nix shell github:cerbos/cerbos-flake
```

Run from container
====================

```sh
docker run --rm --name cerbos -p 3592:3592 ghcr.io/cerbos/cerbos:0.45.1
```

| | Cerbos images can be verified using [sigstore](https://www.sigstore.dev) tools as follows. |
| --- | --- |

```sh
cosign verify \
  --certificate-oidc-issuer="https://token.actions.githubusercontent.com" \
  --certificate-identity="https://github.com/cerbos/cerbos/.github/workflows/release.yaml@refs/tags/v0.45.1" \
  ghcr.io/cerbos/cerbos:0.45.1
```

By default, the container is configured to listen on ports 3592 (HTTP) and 3593 (gRPC) and watch for policy files on the volume mounted at `/policies`. You can override these by creating a new [configuration file](../configuration/index.html).

Create a directory to hold the config file and policies.

```sh
mkdir -p cerbos-quickstart/policies
```

Create a config file.
```sh
cat > cerbos-quickstart/.cerbos.yaml <<EOF
server:
  httpListenAddr: ":3592"

storage:
  driver: "disk"
  disk:
    directory: /policies
EOF
```

To customize the output of the Helm chart, you can use a [Kustomize](https://kustomize.io) post-renderer. Save the following script as `kustomize.sh` and make it executable:

```sh
#!/usr/bin/env bash
cat > base.yaml
exec kubectl kustomize
```

Test that the patch works as expected:

```sh
helm template cerbos/cerbos --post-renderer ./kustomize.sh
```

Now you can install Cerbos with your patches:

```sh
helm install cerbos cerbos/cerbos --version=0.45.1 --post-renderer=./kustomize.sh
```

## [](#%5Fdeploy%5Fcerbos%5Fconfigured%5Fto%5Fread%5Fpolicies%5Ffrom%5Fa%5Fgithub%5Frepository)Deploy Cerbos configured to read policies from a GitHub repository

* Follow the instructions at to create a personal access token (PAT) with `repo` permissions.
* Create a new Kubernetes secret to hold the PAT

```sh
PAT=YOUR_GITHUB_PAT
kubectl create secret generic cerbos-github-token --from-literal=GITHUB_TOKEN=$PAT
```

* Create a new values file named `git-values.yaml` with the following contents:

```yaml
envFrom:
  - secretRef:
      name: cerbos-github-token (1)

cerbos:
  config:
    # Configure the git storage driver
    storage:
      driver: "git"
      git:
        protocol: https
        # Replace with the URL of your GitHub repo.
        url: https://github.com/cerbos/sample-policies.git
        # Replace with the branch name of your repo.
        branch: main
        # Remove or leave empty if the policies are not stored in a subdirectory.
        subDir: hr
        # Path to checkout. By default, /work is a Kubernetes emptyDir volume that is only available for the lifetime of the pod.
        # If you want the work directory to persist between pod restarts, specify the mount path of a persistent volume here.
        checkoutDir: /work
        # How often the remote repo should be checked for updates.
        updatePollInterval: 60s
        # Credentials used to log in to the remote GitHub repo. We are using an environment variable mounted from the secret we created earlier.
        https:
          username: "cerbos" (2)
          password: "${GITHUB_TOKEN}" (3)
```

| **1** | Create an environment variable from the secret we created |
| ----- | --- |
| **2** | Username should be set to a string value (can be any value if using GitHub) |
| **3** | Use the environment variable containing the PAT as the password to log in to GitHub |

* Deploy Cerbos using the Helm chart

```sh
helm install cerbos cerbos/cerbos --version=0.45.1 --values=git-values.yaml
```

## [](#%5Fdeploy%5Fcerbos%5Fconfigured%5Fto%5Fread%5Fpolicies%5Ffrom%5Fa%5Fmounted%5Fvolume)Deploy Cerbos configured to read policies from a mounted volume

Here we demonstrate how to use a `hostPath` volume to feed policies to a Cerbos deployment. You can easily substitute the `hostPath` volume type with any other volume type supported by Kubernetes. See .

* Create a new values file named `pv-values.yaml` with the following contents:

```yaml
volumes: (1)
  - name: cerbos-policies
    hostPath:
      path: /data/cerbos-policies

volumeMounts: (2)
  - name: cerbos-policies
    mountPath: /policies
    readOnly: true

cerbos:
  config:
    storage:
      driver: "disk"
      disk:
        directory: /policies (3)
        watchForChanges: true
```

| **1** | Define a hostPath volume type |
| ----- | --- |
| **2** | Mount the volume to the container at the path /policies |
| **3** | Configure Cerbos to read policies from the mounted /policies directory |

* Deploy Cerbos using the Helm chart

```sh
helm install cerbos cerbos/cerbos --version=0.45.1 --values=pv-values.yaml
```

## [](#%5Fdeploy%5Fa%5Fpdp%5Fconnected%5Fto%5Fcerbos%5Fhub)Deploy a PDP connected to Cerbos Hub

| | Requires a [Cerbos Hub](https://www.cerbos.dev/product-cerbos-hub) account.
[![Try Cerbos Hub](../_images/try_cerbos_hub.png)](https://hub.cerbos.cloud) |
| --- | --- |

* Create a new Kubernetes secret to hold the Cerbos Hub credentials

```sh
kubectl create secret generic cerbos-hub-credentials \
  --from-literal=CERBOS_HUB_CLIENT_ID=YOUR_CLIENT_ID \ (1)
  --from-literal=CERBOS_HUB_CLIENT_SECRET=YOUR_CLIENT_SECRET \ (2)
  --from-literal=CERBOS_HUB_WORKSPACE_SECRET=YOUR_WORKSPACE_SECRET (3)
```

| **1** | Client ID from the Cerbos Hub credential |
| ----- | --- |
| **2** | Client secret from the Cerbos Hub credential |
| **3** | Cerbos Hub workspace secret |

* Create a new values file named `hub-values.yaml` with the following contents:

```yaml
cerbos:
  config:
    # Configure the Hub storage driver
    storage:
      driver: "hub"
      # Configure deployment label. Alternatively, add `CERBOS_HUB_BUNDLE=` to the secret you created above.
      hub:
        remote:
          bundleLabel: "YOUR_LABEL" (1)
    # Configure the Hub audit backend
    audit:
      enabled: true (2)
      backend: "hub"
      hub:
        storagePath: /audit_logs

# Create environment variables from the secret.
envFrom:
  - secretRef:
      name: cerbos-hub-credentials

# Mount volume for locally buffering the audit logs. A persistent volume is recommended for production use cases.
volumes:
  - name: cerbos-audit-logs
    emptyDir: {}

volumeMounts:
  - name: cerbos-audit-logs
    mountPath: /audit_logs
```

| **1** | The label to watch for bundle updates. See the [deployment labels documentation](#cerbos-hub:ROOT:deployment-labels.adoc) for details. |
| ----- | --- |
| **2** | Enables audit log collection. See the [Hub audit log collection documentation](../../../cerbos-hub/audit-log-collection.html) for information about masking sensitive fields and other advanced settings. |

* Deploy Cerbos using the Helm chart

```sh
helm install cerbos cerbos/cerbos --version=0.45.1 --values=hub-values.yaml
```

Tutorial
====================

## [](#%5Fcerbforce)Cerbforce

'Cerbforce' is the new killer CRM system that is taking the world by storm. It began as a small SaaS app that has grown into an enterprise-scale, multi-tenant, global powerhouse. It is now at the point where the basic permission model created at the start of development is no longer fit for purpose, and [Cerbos](https://cerbos.dev/) has been selected as the solution to replace it.

This tutorial walks through the decision-making process for implementing [Cerbos](https://cerbos.dev/). It covers setting up, defining the various resources and policies for the different objects and users in the system, and evolving them to make use of all of Cerbos' features.

Running locally
====================

As the developers of Cerbforce began their investigation of the system, the first step was getting a Cerbos instance up and running locally.

## [](#%5Fcontainer)Container

If you have Docker, you can simply use the published images. The container already ships with a default configuration that has a `disk` driver configured to look for policies mounted at `/policies`. Create an empty policy folder at `policies/`, and then run the following:

```sh
docker run --rm --name cerbos -t \
  -v $(pwd)/policies:/policies \
  -p 3592:3592 \
  ghcr.io/cerbos/cerbos:latest server
```

## [](#%5Fbinary)Binary

Alternatively, if you don’t have Docker running, you can opt to use the release binary directly, which you can download from [here](../installation/binary.html).

### [](#%5Fconfig%5Ffile)Config file

In order to run the binary, you’ll need to create a minimal server configuration file.
The simplest configuration to get up and running (using a local folder for storage of policies) requires only the port and location to be set:

```yaml
---
server:
  httpListenAddr: ":3592"

storage:
  driver: "disk"
  disk:
    directory: policies
```

| | You can find the full configuration schema in the [Cerbos docs](../configuration/index.html). |
| --- | --- |

Save this configuration to a file named `.cerbos.yaml`. You’ll also need to create an empty policy folder `policies/`. Now, extract the binary and run:

```sh
./cerbos server --config=.cerbos.yaml
```

Once started, you can open `` to see the API documentation.

Resource definition
====================

| | The policies for this section can be found [on GitHub](https://github.com/cerbos/cerbos/tree/main/docs/modules/ROOT/examples/tutorial/03-resource-definition/cerbos). |
| --- | --- |

## [](#%5Fauthentication%5Froles)Authentication roles

To begin with, Cerbos needs to know about the basic roles which are provided by your authentication provider. In the case of Cerbforce, [Auth0](https://cerbos.dev/ecosystem/cerbos-auth0) provides a role of either `ADMIN` or `USER` for all profiles. This is important when starting to define access to resources below - for now, just make a note of them.

## [](#%5Fresources)Resources

The best place to start when defining [policies](../policies/index.html) is listing out all the resources and their actions that exist in the system. A resource is an entity type that users are authorized to access.
In the case of Cerbforce, some of the resources and actions are as follows:

| Resource | Actions |
| -------- | ---------------------------- |
| User | Create, Read, Update, Delete |
| Company | Create, Read, Update, Delete |
| Contact | Create, Read, Update, Delete |

With this as a start, you can begin creating your first Cerbos policy - a [resource policy](../policies/resource%5Fpolicies.html).

## [](#%5Fresource%5Fpolicies)Resource policies

Taking the user resource as an example, the most basic resource policy can be defined as follows:

```yaml
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  resource: "user"
  rules:
    - actions:
        - create
        - read
      effect: EFFECT_ALLOW
      roles:
        - user

    - actions:
        - create
        - read
        - update
        - delete
      effect: EFFECT_ALLOW
      roles:
        - admin
```

The structure of a resource policy requires a name to be set on the `resource` key, followed by a list of rules. A rule defines a list of actions on the resource, the effect of the rule (`EFFECT_ALLOW` or `EFFECT_DENY`) and fields stating who the rule applies to - in this simple case, a list of `roles` which is checked against the roles of the principal making the request.

In this case, a request made for a principal with a role of `user` is granted only the `create` and `read` actions, whilst an `admin` role can also perform the `update` and `delete` actions.

The full documentation for resource policies can be found [here](../policies/resource%5Fpolicies.html).

## [](#%5Fwildcard%5Faction)Wildcard action

To simplify things further, admins might need to be able to do every action.
We can use the special `*` wildcard action to specify this succinctly:

```yaml
---
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  resource: "user"
  rules:
    - actions:
        - create
        - read
        - update
      effect: EFFECT_ALLOW
      roles:
        - user

    - actions:
        - "*"
      effect: EFFECT_ALLOW
      roles:
        - admin
```

The `contact` and `company` resources have a similar structure at this stage and can be modeled like so:

```yaml
---
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  resource: "contact"
  rules:
    - actions:
        - create
        - read
        - update
      effect: EFFECT_ALLOW
      roles:
        - user

    - actions:
        - "*"
      effect: EFFECT_ALLOW
      roles:
        - admin
```

```yaml
---
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  resource: "company"
  rules:
    - actions:
        - create
        - read
        - update
      effect: EFFECT_ALLOW
      roles:
        - user

    - actions:
        - "*"
      effect: EFFECT_ALLOW
      roles:
        - admin
```

## [](#%5Fvalidating%5Fpolicies)Validating policies

With the initial policies in place, you can run Cerbos in compile mode, which validates the content of the policy files to ensure they are correct. If you are running Cerbos in a container, mount the folder containing your policies and run the `compile` command pointing to the folder of your policies.

```sh
# Using Container
docker run --rm --name cerbos -t \
  -v /tutorial:/tutorial \
  ghcr.io/cerbos/cerbos:latest compile /tutorial/policies

# Using Binary
./cerbos compile /tutorial/policies
```

If the policies are valid, the process exits with no errors. If there is an issue, the error message points you to where you need to look and the specific problem to fix.

## [](#%5Fconclusion)Conclusion

At this stage, a simple role-based access control (RBAC) model has been designed and the policies have been validated - next up is making an authorization call to Cerbos.
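Before moving on, the decision logic that policies like the ones above encode can be sketched in plain Python. This is an illustration of default-deny rule matching only - the real evaluation is performed by the Cerbos PDP - and the `RULES` structure and `check` helper are names invented for this sketch:

```python
# Plain-Python sketch of rule matching for the wildcard "user" policy above.
# Illustration only -- the actual evaluation happens inside the Cerbos PDP.
RULES = [
    {"actions": {"create", "read", "update"}, "effect": "EFFECT_ALLOW", "roles": {"user"}},
    {"actions": {"*"}, "effect": "EFFECT_ALLOW", "roles": {"admin"}},
]

def check(principal_roles, action):
    """Return the effect of the first matching rule; deny by default."""
    for rule in RULES:
        action_matches = action in rule["actions"] or "*" in rule["actions"]
        role_matches = bool(set(principal_roles) & rule["roles"])
        if action_matches and role_matches:
            return rule["effect"]
    return "EFFECT_DENY"  # no rule matched: default deny

print(check(["user"], "read"))     # EFFECT_ALLOW
print(check(["user"], "delete"))   # EFFECT_DENY
print(check(["admin"], "delete"))  # EFFECT_ALLOW
```

Note the default-deny behavior: if no rule matches the combination of role and action, the result is `EFFECT_DENY`.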
Calling Cerbos
====================

| | The policies for this section can be found [on GitHub](https://github.com/cerbos/cerbos/tree/main/docs/modules/ROOT/examples/tutorial/04-calling-cerbos/cerbos). |
| --- | --- |

Now that you know the policies are valid, it is time to make your first call to Cerbos to perform an authorization check.

## [](#%5Fstarting%5Fcerbos)Starting Cerbos

To start, you need to launch the server:

```sh
# Using Container
docker run --rm --name cerbos -t \
  -v /tutorial:/tutorial \
  -p 3592:3592 \
  ghcr.io/cerbos/cerbos:latest server --config=/tutorial/.cerbos.yaml

# Using Binary
./cerbos server --config=/tutorial/.cerbos.yaml
```

Once Cerbos has started up, you should see output confirming that there are 3 policies loaded and ready to start processing authorization checks:

```sh
2024-12-28T13:55:57.043+0600 INFO cerbos.server maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
2024-12-28T13:55:57.044+0600 INFO cerbos.server Loading configuration from .cerbos.yaml
2024-12-28T13:55:57.045+0600 WARN cerbos.otel Disabling OTLP traces because neither OTEL_EXPORTER_OTLP_ENDPOINT nor OTEL_EXPORTER_OTLP_TRACES_ENDPOINT is defined
2024-12-28T13:55:57.046+0600 INFO cerbos.disk.store Initializing disk store from /Users/username/tutorial/policies
2024-12-28T13:55:57.048+0600 INFO cerbos.index Found 3 executable policies
2024-12-28T13:55:57.048+0600 INFO cerbos.telemetry Telemetry disabled
2024-12-28T13:55:57.048+0600 INFO cerbos.grpc Starting gRPC server at :3593
2024-12-28T13:55:57.050+0600 INFO cerbos.http Starting HTTP server at :3592
```

At this point, how you make a request to the Cerbos instance is down to your preference - a simple cURL command works, as does a GUI such as Postman.

## [](#%5Fcerbos%5Fcheck%5Fcall)Cerbos check call

A call to Cerbos contains 3 key bits of information:

1. The Principal - who is making the request
2. The Resources - a map of entities of a resource kind that they are requesting access to
3. The Actions - what actions they are trying to perform on the entities

The request payload to the `/api/check/resources` endpoint takes these 3 bits of information as JSON:

```json
{
  "principal": {
    "id": "user_1",        // the user ID
    "roles": ["user"],     // list of roles from the user's profile
    "attr": {}             // a map of attributes about the user - not used yet
  },
  "resources": [           // an array of resources being accessed
    {
      "actions": ["read"], // the list of actions to be performed on the resource
      "resource": {        // details about the resource
        "kind": "contact", // the type of the resource
        "id": "contact_1", // the ID of the specific resource instance
        "attr": {}         // a map of attributes about the resource - not used yet
      }
    }
  ]
}
```

To make the actual call using cURL with the default server config:

```sh
curl --location --request POST 'http://localhost:3592/api/check/resources' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "principal": {
      "id": "user_1",
      "roles": ["user"],
      "attr": {}
    },
    "resources": [
      {
        "actions": ["read"],
        "resource": {
          "kind": "contact",
          "id": "contact_1",
          "attr": {}
        }
      }
    ]
  }'
```

The response object looks as follows; for each instance of the resource, the authorization decision for each action is either `EFFECT_ALLOW` or `EFFECT_DENY` depending on the policies:

```json
{
  "results": [
    {
      "resource": {
        "id": "contact_1",
        "kind": "contact"
      },
      "actions": {
        "read": "EFFECT_ALLOW"
      }
    }
  ],
  "cerbosCallId": "49KQ6456PRBLWYMXYDBKZM1F6H"
}
```

You can find the Swagger definition of the Cerbos API by going to the root of the Cerbos instance - for example, if running on the default port.

## [](#%5Fconclusion)Conclusion

Now that you have made the first call to Cerbos, you can move on to a way of checking policy logic without having to make individual calls each time: writing unit tests.
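The same check can also be made from code. Below is a minimal sketch using only the Python standard library; the payload shape and the `/api/check/resources` endpoint are those shown above, the default host and port are assumed, and the helper function names are invented for this example:

```python
import json
import urllib.request

# Sketch: building and sending the check request programmatically instead of
# hand-writing the curl body. Helper names are illustrative only.
def build_check_request(principal_id, roles, kind, resource_id, actions):
    return {
        "principal": {"id": principal_id, "roles": roles, "attr": {}},
        "resources": [
            {
                "actions": actions,
                "resource": {"kind": kind, "id": resource_id, "attr": {}},
            }
        ],
    }

def check_resources(payload, host="http://localhost:3592"):
    # Sends the payload to a running Cerbos PDP (assumes the default HTTP port).
    req = urllib.request.Request(
        f"{host}/api/check/resources",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_check_request("user_1", ["user"], "contact", "contact_1", ["read"])
print(json.dumps(payload, indent=2))
```

With a server running locally, `check_resources(payload)` returns the same decision object as the cURL call above.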
Testing policies
====================

| | The policies for this section can be found [on GitHub](https://github.com/cerbos/cerbos/tree/main/docs/modules/ROOT/examples/tutorial/05-testing-policies/cerbos). |
| --- | --- |

Cerbos allows you to write [tests for policies](../policies/compile.html) and run them as part of the compilation stage to make sure that the policies do exactly what you expect. This saves the manual effort of running example requests over and over to verify the policy logic.

A test suite defines a number of resources and principals and the expected result of actions for any combination of them. To define a test suite, create a `tests` folder alongside your policy folder. In this folder, any number of tests can be defined as YAML, but the file names must end with `_test`.

As an example, the `contact` policy states that a `user` can create, read and update a contact, but only an `admin` can delete one - therefore you can create a test suite for this like the one below:

```yaml
---
name: ContactTestSuite
description: Tests for verifying the contact resource policy
principals:
  admin:
    id: admin
    roles:
      - admin
  user:
    id: user
    roles:
      - user
resources:
  contact:
    kind: contact
    id: contact
tests:
  - name: Contact CRUD Actions
    input:
      principals:
        - admin
        - user
      resources:
        - contact
      actions:
        - create
        - read
        - update
        - delete
    expected:
      - principal: admin
        resource: contact
        actions:
          create: EFFECT_ALLOW
          read: EFFECT_ALLOW
          update: EFFECT_ALLOW
          delete: EFFECT_ALLOW
      - principal: user
        resource: contact
        actions:
          create: EFFECT_ALLOW
          read: EFFECT_ALLOW
          update: EFFECT_ALLOW
          delete: EFFECT_DENY
```

With this defined, you can now extend the compile command to also run the tests, for example:

```sh
# Using Container
docker run --rm --name cerbos -t \
  -v /tutorial:/tutorial \
  -p 3592:3592 \
  ghcr.io/cerbos/cerbos:latest compile --tests=/tutorial/tests /tutorial/policies

# Using Binary
./cerbos compile --tests=/tutorial/tests /tutorial/policies
```

If everything is as expected, the output of the tests should be green:

```none
Test results
= ContactTestSuite (contact_test.yaml)
== 'Contact CRUD Actions' for resource 'contact_test' by principal 'user' [OK]
== 'Contact CRUD Actions' for resource 'contact_test' by principal 'admin' [OK]
```

Full testing documentation can be found [here](../policies/compile.html).

Adding conditions
====================

| | The policies for this section can be found [on GitHub](https://github.com/cerbos/cerbos/tree/main/docs/modules/ROOT/examples/tutorial/06-adding-conditions/cerbos). |
| --- | --- |

In the previous section, an RBAC policy was created that allows anyone with a `user` role to update a user resource - this isn’t what is intended, as it would allow users to update other users' profiles.

```yaml
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  resource: "user"
  rules:
    - actions:
        - create
        - read
        - update
      effect: EFFECT_ALLOW
      roles:
        - user
    # ....other conditions
```

This blanket approach is where pure role-based access control falls down, as more nuance is required to meet the requirements.

## [](#%5Fconditions)Conditions

Cerbos is a powerful attribute-based access control system that can make contextual decisions at request time about whether an action can be taken. In this scenario, Cerbforce’s business logic states that a user can only update their own user profile. To implement this, a check needs to be made to ensure the ID of the user making the request matches the ID of the user resource being updated.
[Conditions](../policies/conditions.html) in Cerbos are written in [Common Expression Language (CEL)](https://github.com/google/cel-spec/blob/master/doc/intro.md), which is a simple way of defining the boolean logic of conditions. In this environment, there are two main bits of data provided that are of interest: `request.principal`, which is the information about the user making the request, and `request.resource`, which is the information about the resource being accessed. The data model for each of these is as follows:

```json
// request.principal
{
  "id": "somePrincipalId", // the principal ID
  "roles": ["user"],       // the list of roles from the auth provider
  "attr": {
    // a map of attributes about the principal
  }
}

// request.resource
{
  "id": "someResourceId", // the resource ID
  "attr": {
    // a map of attributes about the resource
  }
}
```

Using this information, a check to see if the principal ID is the same as the ID of the user resource being accessed can be defined as

```none
request.resource.id == request.principal.id
```

Adding this to the policy requires a new rule to be created that covers just the `update` and `delete` actions, applies to the `user` role, and has a single condition.

```yaml
---
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  resource: "user"
  rules:
    - actions:
        - create
        - read
      effect: EFFECT_ALLOW
      roles:
        - user

    - actions:
        - update
        - delete
      effect: EFFECT_ALLOW
      roles:
        - user
      condition:
        match:
          expr: request.resource.id == request.principal.id
    # ....other conditions
```

Complex logic can be defined in conditions (or sets of conditions), which you can read more about in [the docs](https://docs.cerbos.dev/cerbos/latest/policies/conditions.html).

## [](#%5Fextending%5Ftests)Extending tests

Now that you have a conditional policy, you can add these as test cases in the user tests.
You can now define multiple `user` resources and principals and create test cases for ensuring the `update` action is allowed when the ID of the principal matches the ID of the resource, as well as checking that it isn’t allowed if the condition is not met.

```yaml
---
name: UserTestSuite
description: Tests for verifying the user resource policy
principals:
  admin:
    id: admin
    roles:
      - admin
  user1:
    id: user1
    roles:
      - user
  user2:
    id: user2
    roles:
      - user
resources:
  admin:
    kind: user
    id: admin
  user1:
    kind: user
    id: user1
  user2:
    kind: user
    id: user2
tests:
  - name: User CRUD Actions
    input:
      principals:
        - admin
        - user1
        - user2
      resources:
        - admin
        - user1
        - user2
      actions:
        - create
        - read
        - update
        - delete
    expected:
      - principal: admin
        resource: admin
        actions:
          create: EFFECT_ALLOW
          read: EFFECT_ALLOW
          update: EFFECT_ALLOW
          delete: EFFECT_ALLOW
      - principal: admin
        resource: user1
        actions:
          create: EFFECT_ALLOW
          read: EFFECT_ALLOW
          update: EFFECT_ALLOW
          delete: EFFECT_ALLOW
      - principal: admin
        resource: user2
        actions:
          create: EFFECT_ALLOW
          read: EFFECT_ALLOW
          update: EFFECT_ALLOW
          delete: EFFECT_ALLOW
      - principal: user1
        resource: admin
        actions:
          create: EFFECT_ALLOW
          read: EFFECT_ALLOW
          update: EFFECT_DENY
          delete: EFFECT_DENY
      - principal: user1
        resource: user1
        actions:
          create: EFFECT_ALLOW
          read: EFFECT_ALLOW
          update: EFFECT_ALLOW
          delete: EFFECT_ALLOW
      - principal: user1
        resource: user2
        actions:
          create: EFFECT_ALLOW
          read: EFFECT_ALLOW
          update: EFFECT_DENY
          delete: EFFECT_DENY
      - principal: user2
        resource: admin
        actions:
          create: EFFECT_ALLOW
          read: EFFECT_ALLOW
          update: EFFECT_DENY
          delete: EFFECT_DENY
      - principal: user2
        resource: user1
        actions:
          create: EFFECT_ALLOW
          read: EFFECT_ALLOW
          update: EFFECT_DENY
          delete: EFFECT_DENY
      - principal: user2
        resource: user2
        actions:
          create: EFFECT_ALLOW
          read: EFFECT_ALLOW
          update: EFFECT_ALLOW
          delete: EFFECT_ALLOW
```

Derived roles
====================

| | The policies for this section can be found [on GitHub](https://github.com/cerbos/cerbos/tree/main/docs/modules/ROOT/examples/tutorial/07-derived-roles/cerbos). |
| --- | --- |

The business requirements for Cerbforce state that only the owner of a Contact or Company is allowed to delete it from the system. With Cerbos, the aim is to keep policies as simple as possible and not repeat logic across different resources, so in this situation a [Derived Role](../policies/derived%5Froles.html) can help. Derived roles are a way of augmenting the broad roles which are attached to the user in the directory of the authentication system with contextual data, to provide more fine-grained control at runtime. On every request, all the relevant derived role policies are evaluated and the matching roles are 'attached' to the user as Cerbos computes access.

## [](#%5Fowner%5Fderived%5Frole)Owner derived role

In the Cerbforce data model, the `contact` and `company` resources both have an attribute called `ownerId`, which is the ID of the user that created the record. Rather than adding a condition to both of these resource policies, you are going to create a derived role that gives the principal an additional `owner` role within the context of the request. The policy for this is as follows:

```yaml
---
apiVersion: "api.cerbos.dev/v1"
description: |-
  Common dynamic roles used within the Cerbforce app
derivedRoles:
  name: cerbforce_derived_roles
  definitions:
    - name: owner
      parentRoles: ["user"]
      condition:
        match:
          expr: request.resource.attr.ownerId == request.principal.id
```

The structure is similar to a resource policy, but rather than defining actions with conditions, it defines roles that are an extension of the listed `parentRoles` and can have any number of conditions, as with resources.
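Conceptually, derived-role activation on a request can be pictured with a plain-Python sketch. This is an illustration only, not the Cerbos engine, and the function name is invented for this example:

```python
# Illustration of derived-role activation: on each request, the condition is
# evaluated and matching derived roles are attached to the principal for that
# request only. Not the Cerbos engine -- names here are illustrative.
def active_derived_roles(principal, resource):
    derived = set()
    # owner: parentRoles ["user"], and the resource ownerId must match the principal ID
    if "user" in principal["roles"] and resource["attr"].get("ownerId") == principal["id"]:
        derived.add("owner")
    return derived

alice = {"id": "alice", "roles": ["user"]}
own_contact = {"id": "contact_1", "attr": {"ownerId": "alice"}}
other_contact = {"id": "contact_2", "attr": {"ownerId": "bob"}}
print(active_derived_roles(alice, own_contact))    # {'owner'}
print(active_derived_roles(alice, other_contact))  # set()
```

The key point is that the derived role exists only for the duration of a request where both the parent role and the condition match.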
With this derived role policy set up, a resource policy can import it and make use of the derived roles in its rules, e.g.:

```yaml
---
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  resource: "contact"
  importDerivedRoles:
    - cerbforce_derived_roles
  rules:
    - actions:
        - create
        - read
      effect: EFFECT_ALLOW
      roles:
        - user

    - actions:
        - update
        - delete
      effect: EFFECT_ALLOW
      derivedRoles:
        - owner

    - actions:
        - "*"
      effect: EFFECT_ALLOW
      roles:
        - admin
```

Full documentation can be found [here](../policies/derived%5Froles.html).

Principal policies
====================

| | The policies for this section can be found [on GitHub](https://github.com/cerbos/cerbos/tree/main/docs/modules/ROOT/examples/tutorial/08-principal-policies/cerbos). |
| --- | --- |

The final type of policy that Cerbos supports is a [principal policy](../policies/principal%5Fpolicies.html), a special type that allows user-specific overrides to be defined. In the case of Cerbforce, there is a Data Protection Officer (DPO) who handles any data deletion requests. By default, they would not have any delete access to contacts unless they were the owner of the record or had the `admin` role. To overcome this, a principal policy has been created which targets their user ID and overrides this for the delete action on a contact resource:

```yaml
---
apiVersion: "api.cerbos.dev/v1"
principalPolicy:
  version: "default"
  principal: "dpo1"
  rules:
    - resource: contact
      actions:
        - name: contact_delete
          action: "delete"
          effect: EFFECT_ALLOW
```

With this policy in place, when an authorization check is made with the principal ID of `dpo1`, the delete action on a `contact` resource is overridden to be allowed.

Full documentation can be found [here](../policies/principal%5Fpolicies.html).
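The precedence that principal policies introduce can be pictured with a short sketch: a matching principal-policy rule is consulted before falling back to the resource policy. The data structures and the simplified stand-in for the contact resource policy below are invented for this illustration, not the Cerbos internals:

```python
# Sketch of principal-policy precedence. Illustration only -- the override
# table and the simplified resource-policy stand-in are invented names.
PRINCIPAL_OVERRIDES = {
    ("dpo1", "contact", "delete"): "EFFECT_ALLOW",
}

def contact_resource_policy(roles, action):
    # Simplified stand-in for the contact policy: owners are ignored here,
    # and only admins may delete.
    if action == "delete" and "admin" not in roles:
        return "EFFECT_DENY"
    return "EFFECT_ALLOW"

def check(principal_id, roles, kind, action):
    # Principal-policy overrides are applied first.
    override = PRINCIPAL_OVERRIDES.get((principal_id, kind, action))
    if override is not None:
        return override
    return contact_resource_policy(roles, action)

print(check("dpo1", ["user"], "contact", "delete"))    # EFFECT_ALLOW (override)
print(check("user_1", ["user"], "contact", "delete"))  # EFFECT_DENY
```

The override applies only to the exact principal, resource kind and action it names; every other request falls through to the ordinary policy evaluation.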
Attribute schema
====================

| | The policies for this section can be found [on GitHub](https://github.com/cerbos/cerbos/tree/main/docs/modules/ROOT/examples/tutorial/09-attribute-schema/cerbos). |
| --- | --- |

An additional bit of business logic has been introduced for the `contact` resource, which requires the `active` attribute of a contact to be set to `true` before it can be `update`d or `delete`d. This is so that old contacts are kept for reporting purposes and can’t be accidentally deleted or updated.

This means there are now two attributes of a `contact` resource that are required for the policies to be computed - `ownerId` and `active`. If either of these is not included in the request to check permissions, the result would not be as expected (defaulting to `EFFECT_DENY`). To prevent this mistake, it is possible to define a [schema](../policies/schemas.html) for the attributes of principals and resources, which Cerbos validates against at request time to ensure all fields are provided as expected.
## [](#%5Fdefining%5Fschema)Defining schema

[Attribute schemas](../policies/schemas.html) are defined in [JSON Schema (draft 2020-12)](https://json-schema.org/specification.html) and stored in a special `_schemas` sub-directory alongside the policies. For the contact resource, the schema looks like the following:

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "properties": {
    "ownerId": {
      "type": "string"
    },
    "active": {
      "type": "boolean"
    }
  },
  "required": ["ownerId", "active"]
}
```

Once defined, it is linked to the resource by adding a reference in the policy:

```yaml
---
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  resource: "contact"
  importDerivedRoles:
    - cerbforce_derived_roles
  rules:
    - actions:
        - create
        - read
      effect: EFFECT_ALLOW
      roles:
        - user

    - actions:
        - update
        - delete
      effect: EFFECT_ALLOW
      derivedRoles:
        - owner
      condition:
        match:
          expr: request.resource.attr.active == true

    - actions:
        - "*"
      effect: EFFECT_ALLOW
      roles:
        - admin
  schemas:
    resourceSchema:
      ref: cerbos:///contact.json
```

The same can be done with the attributes of a principal - you can find out more in [the documentation](../policies/schemas.html).

## [](#%5Fenforcing%5Fschema)Enforcing schema

Validating the request against the schema is done at request time by the server - to enable this, a [new schema configuration block](../configuration/schema.html) needs to be added to the `.cerbos.yaml`.
```yaml
schema:
  enforcement: reject
```

With this in place, any request made to check authorization of a `contact` resource is rejected if the attributes are not provided or are of the wrong type:

_Request_

```json
{
  "principal": {
    "id": "user_1",
    "roles": ["user"],
    "attr": {}
  },
  "resource": {
    "kind": "contact",
    "instances": {
      "contact_1": {
        "attr": {
          "ownerId": "user1"
        }
      }
    }
  },
  "actions": ["read"]
}
```

_Response_

```json
{
  "resourceInstances": {
    "contact_1": {
      "actions": {
        "read": "EFFECT_DENY"
      },
      "validationErrors": [
        {
          "message": "missing properties: 'active'",
          "source": "SOURCE_RESOURCE"
        }
      ]
    }
  }
}
```

Integrating Cerbos
====================

With the policies now defined, the authorization logic inside the app can be replaced with a call out to a running Cerbos instance. Cerbos has SDKs available for Go, Java, .NET, Node, PHP, Python, Ruby, and Rust. Documentation for these and other examples can be found [here](../api/index.html).

Tutorial: Writing policies for a simple photo-sharing service
====================

Getting started

* We will use Docker to run the server and the compiler.

```shell
docker pull ghcr.io/cerbos/cerbos:0.45.1
```

* Create a file named `.cerbos.yaml` with the following contents:

```yaml
---
server:
  httpListenAddr: ":3592"

storage:
  driver: "disk"
  disk:
    directory: /photo-share/policies
```

* Create a directory named `policies` to hold the policies.

| | You can find all the policies and tests used in this tutorial at . |
| --- | --- |

## [](#%5Fthe%5Fapatr%5Fapplication)The Apatr application

Apatr is a simple photo-sharing service that allows users to upload their photos and optionally share them with the rest of the world. Users sign up to the service either by creating their own user account on the website or by signing in with an identity provider (IDP) like Google or Facebook.
Apatr uses a third-party identity management tool to manage these accounts and authenticate users to the site. Once they are logged in, users can do the following:

* Create albums to organize their photos
* Upload photos to albums
* Share albums or individual photos with other Apatr users
* Share albums or individual photos with the internet

Apatr also employs a team of moderators to investigate complaints and remove any illegal or offensive items from the site. To respect users' privacy, moderators are only allowed to view photos or albums that are public or that have been flagged as inappropriate by another user.

Apatr's identity provider allows defining roles for users. The roles currently defined in this system are:

* `moderator`: Member of the moderator team
* `user`: Authenticated users

## [](#%5Fresources%5Fand%5Factions)Resources and actions

In the Apatr application, the most obvious resource hierarchy is the following:

* Album
* Photo
* Caption
* Comment
* Description
* User Profile

__Album permissions matrix__

| Resource         | Action  | Allowed role | Condition                                                         |
| ---------------- | ------- | ------------ | ----------------------------------------------------------------- |
| **album:object** | create  | user         |                                                                   |
|                  | delete  | user         | If the user owns the album                                        |
|                  |         | moderator    | If the album is flagged as inappropriate                          |
|                  | share   | user         | If the user owns the album                                        |
|                  | unshare | user         | If the user owns the album                                        |
|                  | view    | user         | If the user owns the album, or the album is public                |
|                  |         | moderator    | If the album is flagged as inappropriate, or the album is public  |
|                  | flag    | user         | If the album is public                                            |

## [](#%5Fderived%5Froles)Derived roles

There are some recurring themes in the above permissions matrix.
* People who have the `user` role can be either owners or viewers depending on the resource they are trying to access
* Moderators get extra capabilities when the content is flagged as inappropriate

These capabilities are determined based on contextual information. Let's codify them so that they can be reused.

```yaml
---
apiVersion: "api.cerbos.dev/v1"
description: |-
  Common dynamic roles used within the Apatr app
derivedRoles:
  name: apatr_common_roles (1)
  definitions:
    - name: owner (2)
      parentRoles: ["user"] (3)
      condition:
        match:
          expr: request.resource.attr.owner == request.principal.id (4)

    - name: abuse_moderator
      parentRoles: ["moderator"]
      condition:
        match:
          expr: request.resource.attr.flagged == true
```

| **1** | Name that we will use to import this set of roles                                           |
| ----- | ------------------------------------------------------------------------------------------ |
| **2** | Descriptive name for this derived role                                                      |
| **3** | The static roles (from the identity provider) to which this derived role applies            |
| **4** | An expression that is applied to the request to determine when this role becomes activated  |

Save the above definition as `apatr_common_roles.yaml` in the `policies` directory. Run the compiler to make sure that the contents of the file are valid.

```shell
docker run -it -v $(pwd):/photo-share ghcr.io/cerbos/cerbos:0.45.1 \
  compile /photo-share/policies
```

## [](#%5Fresource%5Fpolicies)Resource policies

Let's write a resource policy for the `album:object` resource.
```yaml
---
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default" (1)
  importDerivedRoles:
    - apatr_common_roles (2)
  resource: "album:object"
  rules:
    - actions: ['*']
      effect: EFFECT_ALLOW
      derivedRoles:
        - owner

    - actions: ['view', 'flag']
      effect: EFFECT_ALLOW
      roles:
        - user
      condition:
        match:
          expr: request.resource.attr.public == true

    - actions: ['view', 'delete']
      effect: EFFECT_ALLOW
      derivedRoles:
        - abuse_moderator
```

| **1** | You can have multiple policy versions for the same resource (e.g. production vs. staging). If the request does not explicitly specify the version, the default policy takes effect. |
| ----- | --- |
| **2** | Import the roles we defined earlier |

Save the above policy definition as `resource_album_object.yaml` inside the `policies` directory. Run the compiler to make sure that the contents of the policies directory are valid.

```shell
docker run -it -v $(pwd):/photo-share ghcr.io/cerbos/cerbos:0.45.1 \
  compile /photo-share/policies
```

Let's start the server and try out a request.

```shell
docker run -it -v $(pwd):/photo-share -p 3592:3592 ghcr.io/cerbos/cerbos:0.45.1 \
  server --config=/photo-share/.cerbos.yaml
```

| | If you'd like to use [Postman](https://www.postman.com), [Insomnia](https://insomnia.rest) or any other software that supports OpenAPI, the Cerbos OpenAPI definitions can be downloaded by accessing . |
| --- | --- |

Alicia trying to view her own album

```shell
cat <
```

| | for an example of using Cerbos GitHub Actions in a CI workflow to compile and test policies. |
| --- | --- |

## [](#%5Fusing%5Fschemas%5Fto%5Fenforce%5Ftype%5Fsafety%5Foptional)Using schemas to enforce type safety \[Optional\]

The derived roles and resource policy rules we defined above rely on certain attributes being present in the `attr` sections of the incoming request. To ensure that API requests are strictly typed and contain the required attributes, we can define schemas for the principal and resource attribute sections.

Create a new directory named `_schemas` inside the `policies` directory.

```sh
mkdir policies/_schemas
```

Let's add a JSON schema defining the data types and required fields for `album:object` resources. Create a file named `album_object.json` inside the `policies/_schemas` directory with the following contents:

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "properties": {
    "owner": { "type": "string" },
    "public": { "type": "boolean" },
    "flagged": { "type": "boolean" }
  },
  "required": ["owner"]
}
```

Now update `policies/resource_album_object.yaml` to add the reference to the schema:

```yaml
---
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: "default"
  importDerivedRoles:
    - apatr_common_roles
  resource: "album:object"
  rules:
    - actions: ['*']
      effect: EFFECT_ALLOW
      derivedRoles:
        - owner

    - actions: ['view', 'flag']
      effect: EFFECT_ALLOW
      roles:
        - user
      condition:
        match:
          expr: request.resource.attr.public == true

    - actions: ['view', 'delete']
      effect: EFFECT_ALLOW
      derivedRoles:
        - abuse_moderator

  schemas:
    resourceSchema:
      ref: cerbos:///album_object.json (1)
```

| **1** | Defines the schema to use for validating resource attributes |
| ----- | ------------------------------------------------------------ |

Update `.cerbos.yaml` to enable schema enforcement.
```yaml
---
server:
  httpListenAddr: ":3592"
storage:
  driver: "disk"
  disk:
    directory: /photo-share/policies
schema:
  enforcement: reject
```

Now start the server again and send a request that does not conform to the schema. The server response should contain a list of validation errors.

```shell
docker run -it -v $(pwd):/photo-share -p 3592:3592 ghcr.io/cerbos/cerbos:0.45.1 \
  server --config=/photo-share/.cerbos.yaml
```

Invalid request

```shell
cat <
```

* Cerbos
* Node App
* Postgres DB for FusionAuth on port `5432`

### [](#%5Fconfigure%5Ffusionauth)Configure FusionAuth

This example is based on the [FusionAuth 5 Minute Guide](https://fusionauth.io/docs/v1/tech/5-minute-setup-guide/) - and most of the steps have been handled by the `docker compose` setup. The only manual step required is creating the application. To do this, open up and complete the setup wizard, then:

Once we arrive in the FusionAuth admin UI, the first thing we need to do is create an Application. An Application is something that a user can log into. This is the application we are building, or that we are migrating to use FusionAuth. We'll click the Application menu option on the left side of the page or the Setup button in the box at the top of the page.

![fusionauth dashboard applications](../../_images/fusionauth-dashboard-applications.png)

This will take us to the listing page for Applications. Next, we'll click the green plus button (the add button) at the top of the page:

![fusionauth application listing](../../_images/fusionauth-application-listing.png)

On the Application form, we'll need to provide a name for our Application (only used for display purposes) and a couple of items on the OAuth tab. We'll start with a simple setup that allows existing users to log into your application. Therefore, we won't need to define any roles or registration configuration.
If we click on the OAuth tab, we’ll see these options: ![fusionauth application form](../../_images/fusionauth-application-form.png) Most of the defaults will work, but we also need to provide these items: * An authorized redirect URL. This is the route/controller in our application’s backend that will complete the OAuth workflow. This is also known as the 'Backend for Frontend' or BFF pattern, and is a lightweight proxy. In our example, we set this to``. We’ll show some Node.js example code below for this route. * Optionally, we can specify a valid Logout URL. This is where the user will be redirected to after they are logged out of FusionAuth’s OAuth front-end: our application. * We need to ensure that the Authorization Code grant is selected in the Enabled Grants. Next we need to add the roles that will be used by our policies. Back on the application listing page press the 'Manage Roles' button next to our application and add roles for `user` and `editor` (admin should already exist). These roles will be passed back with the user information to our application, and then passed onto Cerbos for use in authorization decisions. ![fusionauth add roles](../../_images/fusionauth-add-roles.png) Once we have all of this configured, we can then copy the Client ID and Client Secret and move to the next step. ### [](#%5Fconfigure%5Fnode%5Fapp)Configure Node App Now that our application has been created, we need to add the Client ID and Client Secret from FusionAuth into the top of `app/index.js` (line 12 & 13). These will be used to identify the app through the login flow. ### [](#%5Ftest%5Fthe%5Fapp)Test the app Now that everything is wired up you should be able to goto and press the login link to authenticate with your FusionAuth account. ## [](#%5Fpolicies)Policies This example has a simple CRUD policy in place for a resource kind of`contact` \- like a CRM system would have. 
The policy file can be found in the `cerbos/policies` folder [here](https://github.com/cerbos/express-fusionauth-cerbos/blob/main/cerbos/policies/contact.yaml). Should you wish to experiment with this policy, you can try it in the [Cerbos Playground](https://play.cerbos.dev/p/g561543292ospj7w0zOrFx7H5DzhmLu2).

The policy expects one of two roles to be set on the principal - `admin` and `user`. These roles are authorized as follows:

| Action | User     | Admin |
| ------ | -------- | ----- |
| list   | Y        | Y     |
| read   | Y        | Y     |
| create | Y        | Y     |
| update | If owner | Y     |
| delete | If owner | Y     |

## [](#%5Frequest%5Fflow)Request Flow

1. The user accesses the application and clicks `Login`
2. The user is directed to the FusionAuth UI and authenticates
3. A token is returned in the redirect URL to the application
4. That token is then exchanged for the user profile information
5. The user profile from FusionAuth is stored (user ID, email, roles etc.)
6. Any requests to the `/contacts` endpoints fetch the data required about the resource being accessed from the data store
7. The Cerbos PDP is called with the principal, resource and action to check the authorization, and an error is returned if the user is not authorized. The [Cerbos package](https://www.npmjs.com/package/cerbos) is used for this.

```javascript
const allowed = await cerbos.check({
  principal: {
    // pass in the user ID and roles from the identity provider
    id: req.userContext.userinfo.sub,
    roles: req.userContext.userinfo.groups,
  },
  resource: {
    kind: "contact",
    instances: {
      // a map of the resource(s) being accessed
      [contact.id]: {
        attr: contact,
      },
    },
  },
  actions: ["read"], // the list of actions being performed
});

if (!allowed.isAuthorized(contact.id, "read")) {
  return res.status(403).json({ error: "Unauthorized" });
}
```

Implementation at this stage will be dependent on your business requirements.
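To connect step 5 of the flow to the `cerbos.check` call, the stored FusionAuth user profile needs to be shaped into a Cerbos principal. The helper below is an illustrative sketch (the function name is hypothetical and not part of the example repo); the field names follow the `userinfo` object used in the snippet above.

```javascript
// Illustrative helper: map a stored FusionAuth user profile to the principal
// shape expected by Cerbos. Field names mirror the userinfo object above.
function userinfoToPrincipal({ sub, roles = [], ...rest }) {
  return {
    id: sub,    // the user's unique ID becomes the principal ID
    roles,      // FusionAuth roles drive the policy's role checks
    attr: rest, // everything else (e.g. email) is available as attributes
  };
}
```

For example, `userinfoToPrincipal({ sub: "user_1", roles: ["user"], email: "a@b.com" })` produces `{ id: "user_1", roles: ["user"], attr: { email: "a@b.com" } }`, which can be passed directly as the `principal` in the check request.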
Tutorial: Using Cerbos with JWT
====================

An example application of integrating [Cerbos](https://cerbos.dev) with an [Express](https://expressjs.com/) server using [JSON Web Tokens](https://jwt.io/) - via [express-jwt](https://github.com/auth0/express-jwt) - for authentication.

## [](#%5Fdependencies)Dependencies

* Node.js
* Docker for running the [Cerbos Policy Decision Point (PDP)](../../../installation/container.html)

## [](#%5Fgetting%5Fstarted)Getting started

1. Clone the repo

```bash
git clone git@github.com:cerbos/express-jwt-cerbos.git
```

2. Start up the Cerbos PDP instance docker container. This will be called by the express app to check authorization.

```bash
cd cerbos
./start.sh
```

3. Install node dependencies

```bash
npm install
```

4. Start the express server

```bash
npm run start
```

## [](#%5Fpolicies)Policies

This example has a simple CRUD policy in place for a resource kind of `contact` - like a CRM system would have. The policy file can be found in the `cerbos/policies` folder [here](https://github.com/cerbos/express-jwt-cerbos/blob/main/cerbos/policies/contact.yaml).

Should you wish to experiment with this policy, you can try it in the [Cerbos Playground](https://play.cerbos.dev/p/sZC611cf06deexP0q8CTcVufTVau1SA3).

The policy expects one of two roles to be set on the principal - `admin` and `user`. These roles are authorized as follows:

| Action | User | Admin |
| ------ | ---- | ----- |
| list   | Y    | Y     |
| read   | Y    | Y     |
| create | N    | Y     |
| update | N    | Y     |
| delete | N    | Y     |

This business logic is represented in Cerbos as a resource policy.

```yaml
apiVersion: api.cerbos.dev/v1
resourcePolicy:
  version: default
  resource: contact
  rules:
    - actions: ["read", "list"]
      roles:
        - admin
        - user
      effect: EFFECT_ALLOW

    - actions: ["create", "update", "delete"]
      roles:
        - admin
      effect: EFFECT_ALLOW
```

## [](#%5Fjwt%5Fstructure)JWT Structure

For this example a JWT needs to be generated to be passed in the authorization header.
The payload of the token contains an array of roles which are passed into Cerbos to use for authorization - the structure is as follows:

```
{
  sub: string,
  name: string,
  iat: number,
  roles: string[] // "user" and "admin" supported in this demo
}
```

[JWT.io](https://jwt.io) can be used to generate a token for testing purposes - an [example is here](https://jwt.io/#debugger-io?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwicm9sZXMiOlsiYWRtaW4iXSwiaWF0IjoxNTE2MjM5MDIyfQ.CQEEaSdswE2tou7MUeSe4-6kfe1imJXnbqhiMFsF13A).

**Note:** The secret is hardcoded in this example to `yoursecret` and the algorithm is `HS256` - you will need to set these for the signature to be valid.

![JWT](../../_images/jwt-token.png)

## [](#%5Frequest%5Fflow)Request Flow

1. An HTTP request comes in and the `express-jwt` library validates the token and adds the payload to `req.user`.
2. The contents of the JWT are mapped to the structure of the principal object required by Cerbos

```js
// Extract data from the JWT (check DB etc) and create the principal object to be sent to Cerbos
const jwtToPrincipal = ({ sub, iat, roles = [], ...rest }) => {
  return {
    id: sub,
    roles,
    attr: rest,
  };
};
```

1. Fetch the data required about the resource being accessed from the data store
2. Call the Cerbos PDP with the principal, resource and action to check the authorization and then return an error if the user is not authorized. The [Cerbos package](https://www.npmjs.com/package/cerbos) is used for this.

```js
const allowed = await cerbos.check({
  principal: jwtToPrincipal(req.user),
  resource: {
    kind: "contact",
    instances: {
      // a map of the resource(s) being accessed
      [contact.id]: {
        attr: contact,
      },
    },
  },
  actions: ["read"], // the list of actions being performed
});

// not authorized for read action
if (!allowed.isAuthorized(contact.id, "read")) {
  return res.status(403).json({ error: "Unauthorized" });
}
```

1.
Serve the response if authorized

## [](#%5Fexample%5Frequests)Example Requests

Once a JWT token has been generated, requests can be made to the express server.

### [](#%5Flist%5Fcontacts)List contacts

Allowed for `user` and `admin` roles

```bash
curl -X GET 'http://localhost:3000/contacts' \
  --header 'Authorization: Bearer '
```

### [](#%5Fget%5Fa%5Fcontact)Get a contact

Allowed for `user` and `admin` roles

```bash
curl -X GET 'http://localhost:3000/contacts/abc123' \
  --header 'Authorization: Bearer '
```

### [](#%5Fcreate%5Fa%5Fcontact)Create a contact

Allowed for `admin` role only

```bash
curl -X POST 'http://localhost:3000/contacts/new' \
  --header 'Authorization: Bearer '
```

Should this request be made with the JWT roles set to `["admin"]` the response will be:

```json
{ "result": "Created contact" }
```

Should this request be made with the JWT roles set to `["user"]` the response will be:

```json
{ "error": "Unauthorized" }
```

### [](#%5Fupdate%5Fa%5Fcontact)Update a contact

Allowed for `admin` role only

```bash
curl -X PATCH 'http://localhost:3000/contacts/abc123' \
  --header 'Authorization: Bearer '
```

Should this request be made with the JWT roles set to `["admin"]` the response will be:

```json
{ "result": "Contact updated" }
```

Should this request be made with the JWT roles set to `["user"]` the response will be:

```json
{ "error": "Unauthorized" }
```

### [](#%5Fdelete%5Fa%5Fcontact)Delete a contact

Allowed for `admin` role only

```bash
curl -X DELETE 'http://localhost:3000/contacts/abc123' \
  --header 'Authorization: Bearer '
```

Should this request be made with the JWT roles set to `["admin"]` the response will be:

```json
{ "result": "Contact deleted" }
```

Should this request be made with the JWT roles set to `["user"]` the response will be:

```json
{ "error": "Unauthorized" }
```

Tutorial: Using Cerbos with Magic
====================

The demise of passwords has [long been
predicted](https://www.forbes.com/sites/forbestechcouncil/2020/03/06/the-inevitable-death-of-passwords/) due to the ongoing leaks, hacks and breaches in recent years. There has been a lot of innovation in this space, and [Magic](https://magic.link) has become a leader with their novel approach of eradicating the need to store passwords at all by making use of 'magic links', which are sent to the provided email address and log you in to the site once clicked.

> Magic provides a key-based identity solution built on top of the Decentralized Identity (DID) standard, where users' identities are self-sovereign by leveraging blockchain public-private key pairs. These keypairs are used to generate zero-knowledge proofs to authenticate users instead of having to rely on users providing passwords to Magic or any identity provider
> — Magic Whitepaper https://www.dropbox.com/s/3flqaszoigwis5b/Magic%20Whitepaper.pdf

This approach is a great way to securely establish a user's identity without running authentication infrastructure. At this point, you can use this identity to fetch extra data about the user, such as roles and group memberships, from your directory, profile store or other database system to further add context about the user. The exact mechanism for this is out of the scope of this article, but Active Directory, LDAP or just a plain old DB are all good places to store this extra user information.

Once a user has authenticated (and the profile has been enriched with profile information), the next step is to establish what the user has permission to do in the application - this is where Cerbos steps in and, through its policy-based approach, can do context-aware authorization using the user (or principal in Cerbos speak) from Magic.

## [](#%5Fexample%5Fimplementation%5Fwith%5Fcerbos)Example Implementation with Cerbos

Implementing this requires passing the token provided from the Magic Client SDK to your backend code and then verifying it with the Magic Admin SDK.
As an example, we have forked Magic's Node/Express/Passport demo repo and added in calls to Cerbos to demonstrate how the two systems can work together - you can find a live [demo here](https://demo-magiclink.cerbos.cloud/) and [view the source code on GitHub](https://github.com/cerbos/express-magiclink-cerbos).

The logical data flow for how this is implemented is as follows:

1. The user visits the site and enters their email address
2. Magic sends an email to that address with a link which authenticates the user
3. The website gets a callback, with the authenticated identity and token available client side, when the user clicks the link in the email
4. Calls to the authenticated endpoint can now be made with the token passed as a Bearer token, which is parsed by Passport.js's Magic integration
5. App code fetches the resource being accessed from the data store
6. The app sends the user information from the verified Magic token along with the resource and desired actions to the Cerbos PDP instance
7. The Cerbos PDP evaluates the policies and returns an ALLOW or DENY
8. The app conditionally returns based on the authorization response

The key part of this is steps #4-7, where the context about the principal and the resource is gathered and sent to Cerbos to determine the authorization. At this stage all the attributes about the resource and the user can be used to make a decision.

## [](#%5Fconclusion)Conclusion

Magic's approach to passwordless authentication and identity is a game changer in how to secure your application, and when paired with Cerbos for authorization, it is possible to deploy context-aware access controls without complex rules or token bloat.

Tutorial: Using Cerbos with Okta
====================

An example application of integrating [Cerbos](https://cerbos.dev) with an [Express](https://expressjs.com/) server using [Okta](https://okta.com/) for authentication.
## [](#%5Fdependencies)Dependencies

* Node.js
* An [Okta](https://okta.com/) account

---

For simplicity this demo uses the hosted Cerbos Demo PDP available in the Playground, so running the Cerbos container locally isn't required. For production use cases a deployed Cerbos PDP is required, and the code must be updated to point to your instance. You can read more about the deployment options [here](https://docs.cerbos.dev/cerbos/latest/deployment/index.html).

---

## [](#%5Fsetup)Setup

### [](#%5Finstall%5Fdeps)Install Deps

1. Clone the repo

```bash
git clone git@github.com:cerbos/express-okta-cerbos.git
```

### [](#%5Fcreate%5Fan%5Fokta%5Fapplication)Create an Okta Application

In your Okta instance you need to create a new application. For this example we will be making use of Okta's ExpressOIDC package, so the application's sign-in method needs to be `OIDC - OpenID Connect` and the application type is `Web Application`.

![Okta Create App](../../_images/okta-create-app.png)

### [](#%5Fset%5Fredirect%5Furls)Set Redirect URLs

The default redirect URLs for sign-in and sign-out are correct if you are running this demo app on the default 8080 port. If you have changed this in your `.env` file then you will need to update them accordingly.

![Okta App Settings](../../_images/okta-app-settings.png)

### [](#%5Fenabling%5Fgroups%5Fin%5Fthe%5Fokta%5Ftoken)Enabling Groups in the Okta Token

By default the groups the user belongs to are not passed to the application in the Okta token - this needs enabling, as these groups will be passed from Okta to Cerbos for use in authorization decisions.

To do this, go to _Security > API_ in the sidebar, and edit the default _Authorization Server_. On this page, go to the _Claims_ tab and press _Add Claim_. Add a new claim named `groups` which includes the groups of the user in the ID token.
![Okta Groups Claim](../../_images/okta-groups-claim.png)

> In production you will likely want to filter this down, but for this example we are enabling all groups to be added to the token.

### [](#%5Fcreate%5Fan%5Fexample%5Fadmin%5Fgroup)Create an example `admin` group

In a new Okta account the only group that exists is the _Everyone_ group. For our demo application policies we expect users to be in the `admin` or `user` group, as this is what is checked. Under _Directory > Groups_ press _Add Group_, create the two groups and add your example users to them.

### [](#%5Fsetup%5Fenvironment%5Fvariables)Setup Environment Variables

Make a copy of the `.env.sample` file and call it `.env`. You will then need to populate the fields that begin with `OKTA_` with the information provided in the new application you created.

```
PORT=8080
CERBOS_HOSTNAME=https://demo-pdp.cerbos.cloud
CERBOS_PLAYGROUND=ygW612cc9c9xXOsOZjI40ovY2LZvXf43
OKTA_DOMAIN=
OKTA_CLIENTID=
OKTA_CLIENTSECRET=
OKTA_APP_BASE_URL=http://localhost:8080
```

> This example is using the hosted Demo PDP of Cerbos and an example Playground instance. If you are running your own Cerbos PDP then update the `CERBOS_HOSTNAME` field to your own instance and remove the `CERBOS_PLAYGROUND` field.

### [](#%5Ftest%5Fthe%5Fapp)Test the app

Now that everything is wired up you should be able to go to and press the login link to authenticate with your Okta account.

## [](#%5Fpolicies)Policies

This example has a simple CRUD policy in place for a resource kind of `contact` - like a CRM system would have. Should you wish to experiment with this policy, you can try it in the [Cerbos Playground](https://play.cerbos.dev/p/g561543292ospj7w0zOrFx7H5DzhmLu2).

The policy expects one of two roles to be set on the principal - `admin` and `user`.
These roles are authorized as follows:

| Action | User | Admin |
| ------ | ---- | ----- |
| list   | Y    | Y     |
| read   | Y    | Y     |
| create | N    | Y     |
| update | N    | Y     |
| delete | N    | Y     |

## [](#%5Frequest%5Fflow)Request Flow

1. The user accesses the application and clicks Login
2. The user is directed to the Okta UI and authenticates
3. A token is returned in the redirect URL to the application
4. That token is then exchanged for the user profile information
5. The user profile from Okta is stored (user ID, roles etc.)
6. Any requests to the /contacts endpoints fetch the data required about the resource being accessed from the data store
7. The Cerbos PDP is called with the principal, resource and action to check the authorization, and an error is returned if the user is not authorized. The Cerbos package is used for this.

```javascript
const allowed = await cerbos.check({
  principal: {
    // pass in the Okta user ID and groups
    id: req.userContext.userinfo.sub,
    roles: req.userContext.userinfo.groups,
  },
  resource: {
    kind: "contact",
    instances: {
      // a map of the resource(s) being accessed
      [contact.id]: {
        attr: contact,
      },
    },
  },
  actions: ["read"], // the list of actions being performed
});

if (!allowed.isAuthorized(contact.id, "read")) {
  return res.status(403).json({ error: "Unauthorized" });
}
```

Implementation at this stage will be dependent on your business requirements.

Tutorial: Using Cerbos with Prisma for fine-grained authorization
====================

[Prisma](https://prisma.io) is a powerful ORM for modern Node.js applications. The Cerbos Prisma query plan adapter converts Cerbos [query plan](../../../api/index.html#resources-query-plan) responses into Prisma queries. This article covers setting up a basic CRM web application using Prisma for data storage and Cerbos for authorization to create, read, update and delete contacts based on who the user is.
Our business requirements for who can do what are as follows:

* Admins can do all actions
* Users in the Sales department can read and create contacts
* Only the user who created the contact can update and delete it

The last point is an important one, as the authorization decision requires context about the resource being accessed in order to decide whether an action can be performed.

Note that whilst authentication is out of scope of this article, Cerbos is compatible with any authentication system - be it basic auth, JWT or a service like [Auth0](https://auth0.com).

You can find the GitHub repo for this tutorial [here](https://github.com/cerbos/express-prisma-cerbos/).

## [](#%5Fsetting%5Fup%5Fprisma)Setting up Prisma

To get started, we need to install our various dependencies. Copy and run the following:

```bash
mkdir express-prisma-cerbos
cd express-prisma-cerbos

cat << EOF > package.json
{
  "prisma": {
    "seed": "ts-node prisma/seed.ts"
  }
}
EOF

npm i express @cerbos/grpc @prisma/client && npm i --save-dev @types/express ts-node
```

For this simplified tutorial, we will use a simple Prisma model to represent a CRM contact. We'll also opt to use a SQLite database, but this can be swapped out to your DB of choice. You can find the Prisma documentation [here](https://www.prisma.io/docs/) for more details.
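To make the three access rules above concrete before we encode them as a Cerbos policy, here is a plain-JavaScript restatement. This is illustrative only: in the app the decision is delegated to the Cerbos PDP, and the function name is hypothetical, but the attribute names (`department`, `ownerId`) match the model and policy used in this tutorial.

```javascript
// Plain-JS restatement of the three access rules (illustrative only;
// the real decision is made by the Cerbos PDP evaluating the policy).
function canPerform(principal, action, contact) {
  // Admins can do all actions
  if (principal.roles.includes("admin")) return true;

  if (principal.roles.includes("user")) {
    // Users in the Sales department can read and create contacts
    if (["read", "create"].includes(action)) {
      return principal.attr.department === "Sales";
    }
    // Only the user who created the contact can update and delete it
    if (["update", "delete"].includes(action)) {
      return contact.ownerId === principal.id;
    }
  }
  return false;
}
```

Keeping this mental model in mind makes the conditions in the policy file easier to follow.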
Create a `prisma` folder and add the basic Prisma schema to `prisma/schema.prisma`, by copying and running the following: ```none mkdir prisma && cat << EOF > prisma/schema.prisma // This is your Prisma schema file, // learn more about it in the docs: https://pris.ly/d/prisma-schema datasource db { provider = "sqlite" url = "file:./dev.db" } generator client { provider = "prisma-client-js" } model Contact { id String @id @default(cuid()) createdAt DateTime @default(now()) updatedAt DateTime @updatedAt firstName String lastName String ownerId String active Boolean @default(false) marketingOptIn Boolean @default(false) } EOF ``` Next, we define the seed script which will be used to populate the database with the following contacts: | ID | First Name | Marketing Opt-In | Active | Owner ID | | -- | ---------- | ---------------- | ------ | -------- | | 1 | Nick | Yes | Yes | 1 | | 2 | Simon | Yes | No | 1 | | 3 | Mary | No | Yes | 1 | | 4 | Christina | Yes | No | 2 | | 5 | Aleks | Yes | Yes | 2 | Run the following to generate the script: ```bash cat << EOF > prisma/seed.ts import { PrismaClient } from "@prisma/client"; const prisma = new PrismaClient(); const contactData = [ { id: "1", firstName: "Nick", lastName: "Smyth", marketingOptIn: true, active: true, ownerId: "1", }, { id: "2", firstName: "Simon", lastName: "Jaff", marketingOptIn: true, active: false, ownerId: "1", }, { id: "3", firstName: "Mary", lastName: "Jane", active: true, marketingOptIn: false, ownerId: "1", }, { id: "4", firstName: "Christina", lastName: "Baker", marketingOptIn: true, active: false, ownerId: "2", }, { id: "5", firstName: "Aleks", lastName: "Kozlov", marketingOptIn: true, active: true, ownerId: "2", } ]; async function main() { console.log("Start seeding ..."); for (const c of contactData) { const contact = await prisma.contact.create({ data: c, }); console.log("Created contact with id: " + contact.id); } console.log("Seeding finished."); } main() .catch((e) => { console.error(e); 
    process.exit(1);
  })
  .finally(async () => {
    await prisma.\$disconnect();
  });
EOF
```

Now, to initialize our DB, generate the Prisma client and seed the database, run the following:

```bash
npx prisma migrate dev --name init
```

## [](#%5Fcreating%5Fan%5Faccess%5Fpolicy)Creating an access policy

| | We will be using a Docker container to run the Cerbos PDP instance, so ensure that you have [Docker](https://docs.docker.com/get-docker/) set up first! |
| --- | --- |

The first step is to create a resource policy file. Our requirements, as a reminder, were:

* Admins can do all actions
* Users in the Sales department can read and create contacts
* Only the user who created the contact can update and delete it

To express these rules, a resource policy file called `contacts.yaml` should be created in the policies folder. Let's create a `cerbos` directory ([see repo](https://github.com/cerbos/express-prisma-cerbos/tree/main/cerbos)) with a `policies` subdirectory, and a file `contacts.yaml` inside it.
To do this, run the following: ```bash mkdir -p cerbos/policies && cat << EOF > cerbos/policies/contacts.yaml --- apiVersion: api.cerbos.dev/v1 resourcePolicy: version: default resource: contact rules: # Admins can do all actions - actions: ["*"] effect: EFFECT_ALLOW roles: - admin # Users in the Sales department can read and create contacts - actions: ["read", "create"] effect: EFFECT_ALLOW roles: - user condition: match: expr: request.principal.attr.department == "Sales" # Only the user who created the contact can update and delete it - actions: ["update", "delete"] effect: EFFECT_ALLOW roles: - user condition: match: expr: request.resource.attr.ownerId == request.principal.id EOF ``` [Conditions](../../../policies/conditions.html) are a powerful part of Cerbos, enabling authorization decisions to be made at request time using context from both the principal (the user) and the resource they are trying to access. In this policy we are using conditions to check the department of the user for read and create actions, and again in the update and delete rule to check that the owner of the resource is the principal making the request. As you are working on the policies, you can run the following to check that they are valid. If no errors are logged then you are good to go. ```bash cd cerbos docker run -i -t -p 3592:3592 \ -v $(pwd)/policies:/policies \ ghcr.io/cerbos/cerbos:0.45.1 \ compile /policies ``` Now let’s fire up the Cerbos PDP.
We provide an image to do this easily — simply run the following: ```bash docker run -i -t -p 3592:3592 \ -v $(pwd)/policies:/policies \ ghcr.io/cerbos/cerbos:0.45.1 \ server ``` If everything is correct, we should see the following output: ```bash 2022-12-07T16:43:40.626Z INFO cerbos.server maxprocs: Leaving GOMAXPROCS=4: CPU quota undefined 2022-12-07T16:43:40.626Z INFO cerbos.server Loading configuration from /conf.default.yaml 2022-12-07T16:43:40.630Z INFO cerbos.index Found 1 executable policies 2022-12-07T16:43:40.631Z INFO cerbos.telemetry Anonymous telemetry enabled. Disable via the config file or by setting the CERBOS_NO_TELEMETRY=1 environment variable 2022-12-07T16:43:40.631Z INFO cerbos.dir.watch Watching directory for changes {"dir": "/policies"} 2022-12-07T16:43:40.632Z INFO cerbos.http Starting HTTP server at :3592 2022-12-07T16:43:40.632Z INFO cerbos.grpc Starting gRPC server at :3593 ``` ## [](#%5Fsetting%5Fup%5Fthe%5Fserver)Setting up the server Having now set up both our Cerbos policy and our Prisma database, it is time to implement our web server. For this example we will be using Express to set up a simple server running on port 3000. We will also import our Prisma and Cerbos clients which we will use later on. ```bash mkdir src cat << EOF > src/index.ts import { PrismaClient } from "@prisma/client"; import express, { Request, Response } from "express"; import { GRPC as Cerbos } from "@cerbos/grpc"; const prisma = new PrismaClient(); const cerbos = new Cerbos("localhost:3592", { tls: false }); // The Cerbos PDP instance const app = express(); app.use(express.json()); const server = app.listen(3000, () => console.log("🚀 Server ready at: http://localhost:3000") ); EOF ``` Now we need to create our routes which we will authorize. For this simple example, we will create a `GET` for a contact resource. Using the Prisma client, query for the contact which matches the ID of the URL parameter. If it is not found, return an error message.
Add the following to `src/index.ts`: ```js // Implementing an authentication provider is out of scope for this article, and you will more than likely already have one in place, // so we use a static user here for illustrative purposes const user = { "id": "1", // user id "role": "user", // single role (user, admin) "department": "Sales" // department of the user }; app.get("/contacts/:id", async ({ params }, res) => { // load the contact const contact = await prisma.contact.findUnique({ where: { id: params.id, }, }); if (!contact) return res.status(404).json({ error: "Contact not found" }); // TODO check authz and return a response }); ``` ## [](#%5Fauthorizing%5Frequests)Authorizing requests With our policy defined, we can call Cerbos from our request handler to authorize the principal to take the action on the resource. To do this, we need to update our `GET` handler and replace the `TODO` with a call to Cerbos, passing in the details about the user and the attributes of the contact resource, as well as the action being performed: ```js // check user is authorized const decision = await cerbos.checkResource({ principal: { id: `${user.id}`, roles: [user.role], attributes: { department: user.department, }, }, resource: { kind: "contact", id: contact.id + '', attributes: JSON.parse(JSON.stringify(contact)), }, actions: ["read"], }); // authorized for read action if (decision.isAllowed("read")) { return res.json(contact); } else { return res.status(403).json({ error: "Unauthorized" }); } ``` In this case, we are only checking a single contact using the `checkResource` method. There is also a `checkResources` method available which supports batching resources into a single request (perhaps for use in a `list` endpoint).
A `checkResources` call could be used like this: ```js const decision = await cerbos.checkResources({ principal: { id: `${user.id}`, roles: [user.role], attributes: { department: user.department, }, }, resources: [ { resource: { kind: "contact", id: contact.id + '', attributes: JSON.parse(JSON.stringify(contact)), }, actions: ["read"], }, ... ], }); decision.isAllowed({ resource: { kind: "contact", id: contact.id + '' }, action: "read", }); // => true ``` Once we get the response back from Cerbos, calling the `.isAllowed` method for the required action (and optionally, the given resource ID in the `checkResources` case) will return a simple boolean of whether the user is authorized or not. Using this, we can either return the contact or throw an `HTTP 403 Unauthorized` response. ## [](#%5Fthe%5Fquery%5Fplanner)The query planner If we provide Cerbos with a `principal`, a description of the `resource` they’re trying to access and the required `action`, we can ask it for a query plan. Start by installing the following dependencies: ```bash npm i express @cerbos/orm-prisma ``` Then add the following to `src/index.ts`: ```js import { queryPlanToPrisma, PlanKind } from "@cerbos/orm-prisma"; app.get("/contacts", async (req, res) => { // Fetch the query plan from Cerbos passing in the principal // resource type and action const contactQueryPlan = await cerbos.planResources({ principal: { id: `${user.id}`, roles: [user.role], attributes: { department: user.department, }, }, resource: { kind: "contact", }, action: "read", }); // TODO convert query plan to a Prisma adapter instance }); ``` We can then use the [Cerbos Prisma ORM adapter](https://github.com/cerbos/query-plan-adapters/blob/main/prisma/README.md) to convert this query plan response, like so: ```js const queryPlanResult = queryPlanToPrisma({ queryPlan: contactQueryPlan, // map or function to change field names to match the prisma model mapper: { "request.resource.attr.ownerId": "ownerId", "request.resource.attr.department": "department", "request.resource.attr.active": "active", "request.resource.attr.marketingOptIn": "marketingOptIn", }, }); let contacts: any[]; if (queryPlanResult.kind === PlanKind.ALWAYS_DENIED) { contacts = []; } else { // Pass the filters in as where conditions // If you have preexisting where conditions, you can pass them in an AND clause contacts = await prisma.contact.findMany({ where: { AND: queryPlanResult.filters }, select: { firstName: true, lastName: true, active: true, marketingOptIn: true, }, }); } return res.json({ contacts, }); ``` In the case that the result `kind` is not `ALWAYS_DENIED`, we retrieve the filters from the adapter instance, and use them to construct a query using the Prisma ORM. ## [](#%5Ftrying%5Fit%5Fout)Trying it out Run the Cerbos PDP, as described above, and separately, fire up the node server as follows: ```bash npx ts-node src/index.ts ``` Then, hit it with some requests: ```bash curl -i http://localhost:3000/contacts/1 curl -i http://localhost:3000/contacts ``` ## [](#%5Fconclusion)Conclusion Through this simple example, we have used Prisma as our ORM to create a REST API for a simple CRM system, authorized using Cerbos. This can be built upon to add more complex requirements, for example: * Checking the IP address of the request to ensure it is within the corporate IP range * Checking that an incoming change is within an acceptable boundary, e.g. only allowing discounts of up to 20% on a product unless the user is an admin * Ensuring certain actions can only be performed during work hours You can find a sample repo of integrating Prisma and Cerbos in an Express server on [GitHub](https://github.com/cerbos/express-prisma-cerbos/), as well as many other example projects of implementing Cerbos.
Tutorial: Using Cerbos with SQLAlchemy ==================== If you maintain an application that handles any _state_ at all, it’s likely that you’ve had to figure out how to both store that state, as well as how to load it into the application layer and act on it in any which way your business logic requires. Perhaps, in your case, a lot of the computational "heavy lifting" is done by the database, and the application is just an abstraction layer where you write your database queries. Or maybe on the contrary, the database is just a basic store which provides the data for the application to manage all of the tricky logic itself. Regardless, there are _many_ ways to build an application (as the common idiom doesn’t go). Application design is a vast and complex process, but one thing we can do to make that process more manageable is to use tools that take a lot of the implementation complexity away…​ ## [](#%5Fenter%5Fsqlalchemy)Enter SQLAlchemy [SQLAlchemy](https://www.sqlalchemy.org/) has established itself as one of the standard database abstraction layers in the Python world. It offers two distinct ways of communicating with the DB: via its lower-level `Core` SQL abstraction toolkit, or via its `ORM` component, which extends `Core` to offer some convenient, higher-level abstractions. ## [](#%5Fwhat%5Fwere%5Fbuilding)What we’re building In this run-through, we’ll be building an application that manages a "Contact directory", enabling users to keep track of their contacts, along with useful information such as employment information (current company etc). We’re going to explore how to model our data, map it to [Cerbos](https://cerbos.dev) entities, and interact with it in a clean, efficient and reusable way. We’ll be building a Python [FastAPI](https://fastapi.tiangolo.com/) server and securing it using the following Cerbos APIs: * `CheckResources`: e.g. can `User X` from the Sales department access `Contact Y`? * `PlanResources`: e.g.
which contacts can `User X` from the Marketing department access? The full source code for this demo can be found in our repo [here](https://github.com/cerbos/python-sqlalchemy-cerbos). ## [](#%5Fprerequisites)Prerequisites * Python 3.10 * [SQLAlchemy](https://docs.sqlalchemy.org/en/14/) 1.4 / 2.0 * [Docker](https://www.docker.com/products/docker-desktop/) running locally. ## [](#%5Fthe%5Fdatabase)The database ### [](#%5Fsetting%5Fup%5Four%5Fmodels)Setting up our models We have the following entities within our application: * `User`: the person interacting with the application * `Contact`: a person within a `User’s` directory (a `User` can have many `Contacts`) * `Company`: the company that a `Contact` is currently employed with (a `Company` can have many `Contacts`) In order to persist and manage these models, we need to be able to represent them in code in a way that can be mapped to our database layer. This is where SQLAlchemy comes in. SQLAlchemy allows us to represent our relational database tables as classes, with attributes representing the columns of those tables. An object instance of one of these classes will represent a single row in the table. An example is shown below: ```python from sqlalchemy import Column, String from sqlalchemy.orm import declarative_base Base = declarative_base() class User(Base): __tablename__ = "user" id = Column(String, primary_key=True) username = Column(String(255)) email = Column(String(255)) # ... ``` It also allows us to go a step further, and model relationships between these tables (via variations of table joins). In our case, we want to be able to model the one-to-many relationships mentioned above: ```python class User(Base): __tablename__ = "user" # ... contacts = relationship("Contact", back_populates="owner") class Contact(Base): __tablename__ = "contact" id = Column(String, primary_key=True) # ... 
owner_id = Column(String, ForeignKey("user.id")) owner = relationship("User", back_populates="contacts", lazy="joined") ``` You can see how we relate the two tables via the `relationship` function. In setting a `relationship` field on _each_ linked class, we establish a bidirectional relationship between the objects (with the "reverse" side being a many-to-one). In this particular case, the `ForeignKey` placed on the child table infers the many-to-one side, and as such, allows for child table objects to reference the parent via `child.owner`/`child.owner_id`. The `lazy="joined"` parameter indicates to SQLAlchemy that we’d like the related object to be loaded eagerly, using a JOIN in the same query as the parent. The full table definitions can be found in [this module](https://github.com/cerbos/python-sqlalchemy-cerbos/blob/main/app/models.py). You can see how SQLAlchemy ORM entity objects can then be used to reference one another in code: ```python from sqlalchemy import select # Session is a SQLAlchemy sessionmaker instance with Session() as s: user = s.scalars(select(User).where(User.username == "gandalf")).first() user.email # "greybeard99@midearth.com" # Note, in order to reference contacts with a `lazy` loading pattern, the # attribute lookup needs to occur in the context of a session - hence it's # in the Session() context manager scope. contact = user.contacts[0] contact.owner_id == user.id # True ``` Check out the excellent [SQLAlchemy documentation](https://docs.sqlalchemy.org/en/14/orm/relationships.html) for more information on relationships. ### [](#%5Fconnecting%5Fto%5Four%5Fdatabase)Connecting to our database SQLAlchemy is a wonderful abstraction layer between Python and a whole array of different relational databases. By specifying the "dialect" when connecting to a DB engine, we tell it which relational database it is connecting to. For our demo, we’ll be setting up a simple, ephemeral SQLite instance.
We won’t even persist it to disk; each time the application is started, it’ll build the DB in memory and populate it with a migration script. We create the engine like so: ```python from sqlalchemy import create_engine from sqlalchemy.pool import StaticPool engine = create_engine( "sqlite://", # the absence of a specified URL infers a `:memory:` database (e.g. no disk persistence) connect_args={"check_same_thread": False}, # in FastAPI, when using sync (def) functions, more than one thread could interact with the database # for the same request, so we need to make SQLite know that it should allow that poolclass=StaticPool, # Use a static pool to persist state with an in memory instance of sqlite ) ``` ### [](#%5Ftables%5Fand%5Fmetadata)Tables and metadata > To start using the SQLAlchemy Expression Language, we will want to have `Table` objects constructed that represent all of the database tables we are interested in working with. Each `Table` may be **declared**, meaning we explicitly spell out in source code what the table looks like, or may be **reflected**, which means we generate the object based on what’s already present in a particular database. > > Whether we will declare or reflect our tables, we start out with a collection that will be where we place our tables known as the MetaData object. This object is essentially a facade around a Python dictionary that stores a series of Table objects keyed to their string name. Our classes above inherit from a base class generated from a call to `declarative_base()`. This "declarative" method allows us to declare user-defined classes and `Table` metadata at once. Each time a class inherits from this `Base` class, it is added to this collection, or `registry`. 
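The quoted description of `MetaData` — "essentially a facade around a Python dictionary that stores a series of `Table` objects keyed to their string name" — can be illustrated with a toy, dependency-free sketch of the registration pattern. Note this is not SQLAlchemy code; the `Base`/`metadata` names below merely mirror SQLAlchemy's:

```python
# Toy illustration (not SQLAlchemy): a declarative base that collects
# "tables" by letting each subclass register itself in a shared
# metadata dict, keyed by its __tablename__.
class Base:
    metadata = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        Base.metadata[cls.__tablename__] = cls


class User(Base):
    __tablename__ = "user"


class Contact(Base):
    __tablename__ = "contact"


# Every mapped class is now discoverable through the shared collection,
# which is what makes a later create_all()-style pass over all tables possible.
print(sorted(Base.metadata))  # ['contact', 'user']
```

This is why simply defining a class that inherits from the declarative base is enough for `create_all` to know about its table later on.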
The following call will generate the database tables from the metadata: ```python Base.metadata.create_all(engine) ``` ### [](#%5Fpopulating%5Fthe%5Fdatabase)Populating the database We can then generate a `session` from our `engine` instance, and use it to populate our newly generated tables: ```python with Session() as s: coca_cola = Company(name="Coca Cola") s.add(coca_cola) s.commit() john = User( name="John", username="john", email="john@cerbos.demo", role="user", department="Sales", ) s.add(john) s.commit() s.add(Contact( first_name="Nick", last_name="Smyth", marketing_opt_in=True, is_active=True, owner=john, company=coca_cola, )) s.commit() ``` You can see in the example above (in the `Contact` definition) how we can define relationships by referencing instances of the table classes. Again, the full source code for this section can be found [here](https://github.com/cerbos/python-sqlalchemy-cerbos/blob/main/app/models.py). ## [](#%5Fthe%5Fapi)The API We now have a database which can be declared and populated on demand, and models which allow us to interact with it. The next step is to build an API layer to expose the data, and to secure the endpoints and resources with Cerbos. We’ll be creating our server with FastAPI. The source code for this section can be found [here](https://github.com/cerbos/python-sqlalchemy-cerbos/blob/main/main.py). ### [](#%5Fdependency%5Finjection%5Fwith%5Ffastapi%5Fdependables)Dependency injection with FastAPI dependables FastAPI allows you to define callables called "dependables", which are functions that take all of the same arguments as a "path operation function" and return whatever we might require for the handler. Declaring a handler parameter with a default value of `Depends(fn)` tells FastAPI to call `fn` at request time and inject its return value as that argument. We define a few dependables which we can use across our endpoints.
Firstly, one to retrieve the cerbos Principal instance from the username (which in itself is retrieved via the FastAPI provided `HTTPBasic` dependable): ```python from fastapi import Depends, HTTPException, status from fastapi.security import HTTPBasic, HTTPBasicCredentials security = HTTPBasic() def get_principal(credentials: HTTPBasicCredentials = Depends(security)) -> Principal: username = credentials.username with Session() as s: # retrieve `user` from the DB to access the attributes user = s.scalars(select(User).where(User.username == username)).first() if user is None: raise HTTPException( status_code=status.HTTP_404_NOT_FOUND, detail="User not found", ) return Principal(user.id, roles={user.role}, attr={"department": user.department}) ``` This can then be used on all endpoints to authenticate the user, and then assert that the user exists in the database: ```python @app.get("/contacts") def get_contacts(p: Principal = Depends(get_principal)): # do something with the principal "p" ``` We then create a dependable which attempts to retrieve the `Contact` from the database based on a path parameter in the URL: `contact_id`: ```python def get_db_contact(contact_id: str) -> Contact: with Session() as s: contact = s.scalars(select(Contact).where(Contact.id == contact_id)).first() if contact is None: raise HTTPException( status_code=status.HTTP_404_NOT_FOUND, detail="Contact not found", ) return contact ``` This in turn can then be nested in another dependable which attempts to return the `Contact` Resource instance: ```python def get_resource_from_contact( db_contact: Contact = Depends(get_db_contact), ) -> Resource: return Resource( id=db_contact.id, kind="contact", attr=jsonable_encoder( {n.name: getattr(db_contact, n.name) for n in Contact.__table__.c} ), ) ``` These can then be used to attempt to retrieve a Cerbos `Resource`, or a SQLAlchemy `Contact` instance respectively, on routes which include the `contact_id` path parameter: ```python 
@app.delete("/contacts/{contact_id}") def delete_contact( r: Resource = Depends(get_resource_from_contact), p: Principal = Depends(get_principal), ): # do something with the resource @app.get("/contacts/{contact_id}") def get_contact( db_contact: Contact = Depends(get_db_contact), p: Principal = Depends(get_principal) ): # optionally, call the dependable direct to retrieve the resource from the db Contact instance resource = get_resource_from_contact(db_contact) ``` ### [](#%5Fdefining%5Four%5Fapi%5Fschema)Defining our API schema Some of our routes will require specific parameters in the payload in order to carry out the given request. For example, we might need routes for creating or updating new or existing `Contacts`. FastAPI provides a nice interface to enforce request schema via "Pydantic" models: ```python from pydantic import BaseModel class ContactSchema(BaseModel): first_name: str last_name: str owner_id: str company_id: str is_active: bool = False marketing_opt_in: bool = False class Config: # tell the Pydantic model to read the data even if it is not a dict, but an ORM model # (or any other arbitrary object with attributes) orm_mode = True ``` Once we’ve defined these schema models, we can use them in the FastAPI routes: ```python @app.post("/contacts/new") def create_contact( contact_schema: ContactSchema, p: Principal = Depends(get_principal) ): with CerbosClient(host="http://localhost:3592") as c: if not c.is_allowed( "create", p, Resource( id="new", kind="contact", ), ): raise HTTPException( status_code=status.HTTP_403_FORBIDDEN, detail="Unauthorized" ) db_contact = Contact(**contact_schema.dict()) with Session() as s: s.add(db_contact) s.commit() s.refresh(db_contact) return {"result": "Created contact", "contact": db_contact} ``` FastAPI will automatically validate the input payload to ensure required fields are present, and types are correct (as well as other optional checks). 
We can then use the schema model attributes to generate SQLAlchemy models as you can see above. The schema for our demo can be found in [this module](https://github.com/cerbos/python-sqlalchemy-cerbos/blob/main/app/schemas.py). ### [](#%5Fprotecting%5Fthe%5Froutes)Protecting the routes Now we have our dependables and API schema defined, we can start to define the routes and secure them using Cerbos. We can make granular checks against specific `principal:resource:action` mappings using Cerbos' `CheckResources` API (via the `is_allowed` method): ```python @app.get("/contacts/{contact_id}") def get_contact( db_contact: Contact = Depends(get_db_contact), p: Principal = Depends(get_principal) ): r = get_resource_from_contact(db_contact) with CerbosClient(host="http://localhost:3592") as c: if not c.is_allowed("read", p, r): raise HTTPException( status_code=status.HTTP_403_FORBIDDEN, detail="Unauthorized" ) return db_contact ``` However, sometimes, we want to establish which resources a Principal has access to. To do this, we can use the `PlanResources` API. ### [](#%5Fthe%5Fquery%5Fplanner)The query planner If we provide Cerbos with a `Principal` and a description of the resource they’re trying to access (`ResourceDesc`), we can ask it for a query plan. 
The `PlanResources` call returns one of the following: * `KIND_ALWAYS_ALLOWED` * `KIND_ALWAYS_DENIED` * `KIND_CONDITIONAL` In the final case, it’ll also return an abstract syntax tree (AST) of the condition that must be satisfied to allow the action: ```python @app.get("/contacts") def get_contacts(p: Principal = Depends(get_principal)): with CerbosClient(host="http://localhost:3592") as c: rd = ResourceDesc("contact") # Get the query plan for "read" action plan = c.plan_resources("read", p, rd) print(json.dumps(plan.to_dict(), sort_keys=False, indent=4)) ``` Cerbos provides a [SQLAlchemy adapter library](https://github.com/cerbos/query-plan-adapters/tree/main/sqlalchemy) with an API that takes the query plan response, and uses it to generate a SQLAlchemy query object. Continuing below: ```python query = get_query( plan, Contact, { "request.resource.attr.owner_id": User.id, "request.resource.attr.department": User.department, "request.resource.attr.is_active": Contact.is_active, "request.resource.attr.marketing_opt_in": Contact.marketing_opt_in, }, [(User, Contact.owner_id == User.id)], ) # Optionally reduce the returned columns (`with_only_columns` returns a new `select`) # NOTE: this is wise to do as standard, to avoid implicit joins generated by sqla `relationship()` usage, if present query = query.with_only_columns( Contact.id, Contact.first_name, Contact.last_name, Contact.is_active, Contact.marketing_opt_in, ) print(query.compile(compile_kwargs={"literal_binds": True})) ``` The provided `get_query` function accepts the following parameters, respectively: 1. query plan 2. a primary SQLAlchemy `Table` or ORM `DeclarativeMeta` type (the `FROM table` part of the resulting query) 3. the "attribute map" - responsible for mapping the Cerbos resource attribute strings to the associated SQLAlchemy columns (type `Column` or ORM `InstrumentedAttribute`) 4. 
OPTIONAL: list of explicit table joins - required only if more than one table specified in primary table + attribute map It returns a SQLAlchemy `Selectable`, which can be further extended/reduced, and then used to query the database: ```python # ... with Session() as s: rows = s.execute(query).fetchall() return rows ``` ### [](#%5Frun%5Fthe%5Fserver)Run the server Now we understand how everything works, let’s fire up the server and the Cerbos PDP, and test it out. Clone the repo: ```sh git clone git@github.com:cerbos/python-sqlalchemy-cerbos.git cd python-sqlalchemy-cerbos ``` Start up the Cerbos PDP instance docker container: ```sh cd cerbos ./start.sh ``` Install Python dependencies: ```sh # from project root pdm install ``` Start the FastAPI dev server: ```sh pdm run demo ``` ### [](#%5Fexample%5Frequests)Example requests #### [](#%5Fget%5Fall%5Fpermitted%5Fcontacts)Get all permitted contacts ```sh curl http://john@localhost:8000/contacts ``` #### [](#%5Fget%5Fa%5Fsingle%5Fcontact)Get a single contact Sales user, contact owned ⇒ `200 OK` ```sh curl -i http://john@localhost:8000/contacts/1 ``` Sales user, contact not owned or active ⇒ `403 Forbidden` ```sh curl -i http://john@localhost:8000/contacts/4 ``` #### [](#%5Fcreate%5Fa%5Fcontact)Create a contact Sales user ⇒ `200 OK` ```sh curl -i http://john@localhost:8000/contacts/new \ -H 'Content-Type: application/json' \ -X POST \ -d '{"first_name": "frodo", "last_name": "baggins", "owner_id": "2", "company_id": "2"}' ``` Marketing user (e.g. `geri`) ⇒ `403 Forbidden` #### [](#%5Fdelete%5Fa%5Fcontact)Delete a contact Contact owner ⇒ `200 OK` ```sh curl -i http://john@localhost:8000/contacts/1 -X DELETE ``` Non-owner ⇒ `403 Forbidden` ```sh curl -i http://john@localhost:8000/contacts/3 -X DELETE ``` --- If you have any questions or feedback, or to chat to us and other like-minded technologists, please join our [Slack community](https://community.cerbos.dev)! 
Administration ==================== * [User management](user-management.html) Audit log collection ==================== With a simple configuration change, you can configure the PDPs to securely send audit logs to Cerbos Hub. This vastly simplifies the work that would otherwise be required to configure and deploy a log aggregation solution to securely collect, store and query audit logs from across your fleet. ## [](#%5Fenabling%5Fcollection)Enabling collection To get started, you need to obtain a set of client credentials. Navigate to the **Settings** → **Client credentials** tab of the deployment, click on **Generate a client credential** and generate a **Read & write** credential. Make sure to save the client secret in a safe place as it cannot be recovered. The client credentials can be provided to the PDP using environment variables or the configuration file. The environment variables to set are: | Variable | Description | | --------------------------- | ----------------------------------------------------------------------------- | | CERBOS\_HUB\_CLIENT\_ID | Client ID | | CERBOS\_HUB\_CLIENT\_SECRET | Client secret | | CERBOS\_HUB\_PDP\_ID | Optional. A unique name for the PDP. If not provided, a random value is used. | Alternatively, you can define these values in the Cerbos configuration file as follows: ```yaml hub: credentials: pdpID: "..." # Optional. Identifier for this Cerbos instance. clientID: "..." # ClientID clientSecret: "..." # ClientSecret ``` To enable audit log collection, configure the `hub` audit log backend with a local storage path. This local storage path is important for preserving the audit logs until they are safely saved to Cerbos Hub. If there are any network interruptions or if the PDP process crashes, the audit logs generated up to that point are saved on disk and will be sent to Cerbos Hub the next time the PDP starts.
If you’re using a container orchestrator or a cloud-based solution to deploy Cerbos, attach a persistent storage volume at this path to ensure that the data does not get lost. ```yaml server: httpListenAddr: ":3592" # The port the HTTP server will listen on grpcListenAddr: ":3593" # The port the gRPC server will listen on hub: credentials: pdpID: "..." # Optional. Identifier for this Cerbos instance. clientID: "..." # ClientID clientSecret: "..." # ClientSecret audit: enabled: true backend: hub hub: storagePath: "..." # Local storage path for buffering the audit logs. ``` Refer to [Cerbos documentation](../cerbos/latest/configuration/audit.html) for details about common audit configurations that apply to all backends. | | To quickly try out the Cerbos Hub audit logs feature, you can use the following command: `mkdir -p /tmp/cerbos && docker run --rm --name cerbos -p 3592:3592 -p 3593:3593 -e CERBOS_HUB_CLIENT_ID="..." -e CERBOS_HUB_CLIENT_SECRET="..." -v /tmp/cerbos:/audit_logs ghcr.io/cerbos/cerbos:latest server --set=audit.enabled=true --set=audit.backend=hub --set=audit.hub.storagePath=/audit_logs` | | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ## [](#%5Faccessing%5Fthe%5Flogs)Accessing the logs Once collection is enabled, the logs can be accessed via the _Audit logs_ tab in Cerbos Hub. From here you can select which logs to view: ### [](#%5Fdecision%5Flogs)Decision logs The decision logs are records of the authorization decisions made by the Cerbos PDP.
These logs provide detailed information about each decision, including the inputs, Cerbos policies evaluated, and the `ALLOW/DENY` decisions along with any outputs from rule evaluation. Besides providing a comprehensive audit trail, these records can be used for troubleshooting purposes and to understand how Cerbos policies are used within your organization. There are two views in this section. The JSON view provides access to the raw log entry while the decision view provides a compact view consisting of the most pertinent details extracted from the log entry. ![Decision logs](_images/audit_log_decision.png) ![Decision logs JSON](_images/audit_log_decision_json.png) In the decision view, clicking on a log entry will open the details pane which shows the full request and response data, including the principal, resource, action, policy decision, and other metadata. In the case of a `PlanResources` request, the detailed plan is also shown. ![Decision logs detail](_images/audit_log_decision_detail.png) In addition to viewing a particular time range, you can further filter the logs by a particular PDP ID, principal, resource kind, action, or policy decision as well. ### [](#%5Faccess%5Flogs)Access logs ![Access logs](_images/audit_log_access.png) The access logs are records of all the API requests received by the Cerbos PDP. The valid `CheckResources` and `PlanResources` API calls would have a corresponding decision log entry with the same call ID. API requests that are invalid or unauthenticated are logged as well and can be used for identifying misconfigured clients or unauthorized access attempts. You can filter the logs by a particular PDP and a time range. ## [](#%5Fmasking%5Fsensitive%5Ffields)Masking sensitive fields You can define masks to filter out sensitive or personally identifiable information (PII) that might be included in the audit log entries. Masked fields are removed locally at the PDP and are never transmitted to Cerbos Hub.
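To make the effect of masking concrete, here is a toy pure-Python sketch of what removing a masked field from a log entry looks like before the entry leaves the PDP. This is not the Cerbos implementation: it handles only simple dotted paths plus a `[*]` list wildcard (the shapes used in the mask configuration), and the `apply_mask` helper name is ours:

```python
# Toy sketch of audit-log field masking (not the Cerbos implementation).
# Supports dotted paths plus a "[*]" list wildcard on a segment,
# e.g. "inputs[*].principal.attr.foo" or "peer.address".
def apply_mask(entry: dict, path: str) -> None:
    head, _, rest = path.partition(".")
    if head.endswith("[*]"):
        # Apply the remainder of the path to every element of the list.
        for item in entry.get(head[:-3], []):
            apply_mask(item, rest)
    elif rest:
        child = entry.get(head)
        if isinstance(child, dict):
            apply_mask(child, rest)
    else:
        entry.pop(head, None)  # remove the masked field entirely

log = {
    "inputs": [
        {"principal": {"id": "1", "attr": {"foo": "secret", "dept": "Sales"}}},
    ],
    "peer": {"address": "10.0.0.1"},
}
apply_mask(log, "inputs[*].principal.attr.foo")
apply_mask(log, "peer.address")
print(log)
# the "foo" attribute and the peer address are gone; the rest of the entry remains
```

The key point is that the sensitive values are dropped from the entry itself, so they never appear in what is synced upstream.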
Masks are defined using a subset of JSONPath syntax.

```yaml
hub:
  credentials:
    pdpID: "..." # Optional. Identifier for this Cerbos instance.
    clientID: "..." # ClientID
    clientSecret: "..." # ClientSecret

audit:
  enabled: true
  backend: hub
  hub:
    storagePath: "..." # Local storage path for buffering the audit logs.
    mask:
      # Fields to mask from CheckResources requests
      checkResources:
        - inputs[*].principal.attr.foo
        - inputs[*].auxData
        - outputs
      # Fields to mask from the metadata
      metadata:
        - authorization
      # Fields to mask from the peer information
      peer:
        - address
        - forwarded_for
      # Fields to mask from the PlanResources requests.
      planResources:
        - input.principal.attr.nestedMap.foo
```

Use the [local audit backend](../cerbos/latest/configuration/audit.html#local) along with [cerbosctl audit commands](../cerbos/latest/cli/cerbosctl.html#audit) to inspect your audit logs locally and determine the paths that need to be masked.

In order to avoid slowing down request processing and to avoid data loss, raw log entries are stored locally on disk at the storage path. The masks are applied later by the background process that syncs the on-disk log entries to Cerbos Hub. To avoid storing authentication tokens and other sensitive request parameters locally, use the top-level `includeMetadataKeys` and `excludeMetadataKeys` settings. Refer to [Cerbos audit configuration](../cerbos/latest/configuration/audit.html) for more details.
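To make the mask semantics concrete, here is a minimal Python sketch of how a path such as `inputs[*].principal.attr.foo` conceptually removes a field from a log entry. It is illustrative only: it supports just dotted segments and the `[*]` wildcard, and it is not the PDP's actual implementation.

```python
import copy


def apply_mask(entry: dict, path: str) -> dict:
    """Return a copy of `entry` with the field addressed by a simplified
    JSONPath-style mask removed. Supports dotted keys and the [*] wildcard."""
    entry = copy.deepcopy(entry)
    segments = path.replace("[*]", ".*").split(".")

    def walk(node, segs):
        if not segs:
            return
        head, rest = segs[0], segs[1:]
        if head == "*" and isinstance(node, list):
            for item in node:  # wildcard: apply remaining path to every element
                walk(item, rest)
        elif isinstance(node, dict) and head in node:
            if rest:
                walk(node[head], rest)
            else:
                del node[head]  # final segment: drop the field

    walk(entry, segments)
    return entry


log = {"inputs": [{"principal": {"id": "alice", "attr": {"foo": "secret", "bar": 1}}}]}
masked = apply_mask(log, "inputs[*].principal.attr.foo")
# "foo" is removed from the copy; "bar" and the original entry are untouched
```

Paths that match nothing simply leave the entry unchanged, which mirrors how an absent field cannot be masked.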
## [](#%5Fadvanced%5Fconfiguration)Advanced configuration

Advanced users can tune the local log retention period and other buffering settings. We generally do not recommend changing the default values unless absolutely necessary.

```yaml
audit:
  enabled: true
  backend: hub
  hub:
    storagePath: "..." # Local storage path for buffering the audit logs.
    retentionPeriod: 168h # How long to keep buffered records on disk.
    advanced:
      bufferSize: 16 # Size of the memory buffer. Increasing this uses more memory and increases the chance of losing data during a crash.
      maxBatchSize: 16 # Write batch size. If your records are small, increasing this will reduce disk IO.
      flushInterval: 30s # Time to keep records in memory before committing.
      gcInterval: 15m # How often the garbage collector runs to remove old entries from the log.
```

Concepts
====================

Client credentials
Used to establish an authenticated connection to Cerbos Hub using a client ID and a secret. Client credentials are either scoped to a deployment or a managed policy store. They can be created from the **Settings** → **Client credentials** section of Cerbos Hub.

Deployment
A deployment is a specific configuration of policy stores (such as ‘production’ or ‘staging’) that can be connected to a set of PDPs. Each new change to the underlying store(s) results in a new policy build that’s automatically delivered to the PDPs if the tests are successful.
Organization
An Organization serves as the top-level entity in Cerbos Hub and provides centralized control over billing, access control, and Workspace management. Typically a business would have one Organization and a number of Workspaces underneath it.

Policy bundle
An encrypted file containing optimised binary representations of policies corresponding to a git commit. On every commit to the policy repository, if the git reference of the commit matches a configured label, Cerbos Hub validates the policies in the new commit, runs tests if there are any and produces a policy bundle that is then pushed to all connected PDPs that are configured to watch that label.

Policy Decision Point (Service PDP)
The open source Cerbos server instances that you run in your own infrastructure are called service PDPs. Cerbos Hub is the management control plane for PDP instances that are running inside your environment. Rather than each PDP being responsible for detecting policy changes, parsing, compiling and loading them, they get pre-compiled policy bundles pushed to them from Cerbos Hub. This model ensures that all your data remains within your network perimeter and that authorization checks happen locally with low latency while reducing the overhead of policy updates and the time it takes for the whole fleet to get in sync. A PDP must be configured with the name of a label, workspace secret and client credentials in order to connect to Cerbos Hub.

Policy playground
A browser-based policy editor to quickly prototype, test and collaborate on Cerbos policies. An organization can have multiple playground instances and all authorized users of the organization have access to those instances.

Policy store
A versioned, cloud-based storage container for Cerbos policies.
A policy store can be either linked to a supported git provider for automatic mirroring or managed manually using the Cerbos Hub user interface, Cerbos Hub SDKs or the [cerbosctl](../cerbos/latest/cli/cerbosctl.html) utility. Multiple stores can be connected to a single deployment, making it easy to manage policies by teams, tenants or any other desired level of organization and combine them all at deployment time to distribute to PDPs.

Workspace
A Workspace encompasses a set of users, policy stores and deployments to help organize your work by teams, departments, tenants or any other desired form of separating responsibilities.

Service Policy Decision Point
====================

The open source Cerbos server instances that you run in your own infrastructure are called service PDPs. Cerbos Hub is the management control plane for PDP instances that are running inside your environment. Rather than each PDP being responsible for detecting policy changes, parsing, compiling, and loading them, they get pre-compiled policy bundles pushed to them from Cerbos Hub. This model ensures that all your data remains within your network perimeter and that authorization checks happen locally with low latency while reducing the overhead of policy updates and the time it takes for the whole fleet to get in sync.

A PDP must be configured with the ID of a deployment and client credentials in order to connect to Cerbos Hub.

## [](#%5Fdeploying%5Fa%5Fpdp)Deploying a PDP

Connecting to Cerbos Hub is a matter of configuring the `hub` storage driver, using the configuration file, environment variables or command line arguments. The simplest method to get a connected PDP up and running is to run the container with configuration passed via environment variables:

```shell
docker run --rm --name cerbos \
  -p 3592:3592 -p 3593:3593 \
  -e CERBOS_HUB_DEPLOYMENT_ID="..." \
  -e CERBOS_HUB_CLIENT_ID="..." \
  -e CERBOS_HUB_CLIENT_SECRET="..." \
  -e CERBOS_HUB_PDP_ID="..." \
  ghcr.io/cerbos/cerbos:latest server
```

The environment variables to set are:

| CERBOS\_HUB\_DEPLOYMENT\_ID | The deployment ID to load policies from |
| --------------------------- | ---------------------------------------------------------------------------------------------------------------- |
| CERBOS\_HUB\_CLIENT\_ID | Client ID |
| CERBOS\_HUB\_CLIENT\_SECRET | Client secret |
| CERBOS\_HUB\_PDP\_ID | Optional. The name shown for the PDP in the Cerbos Hub monitoring page. If not provided, a random value is used. |

Alternatively, you can define these values in the Cerbos configuration file as follows:

```yaml
server:
  httpListenAddr: ":3592" # The port the HTTP server will listen on
  grpcListenAddr: ":3593" # The port the gRPC server will listen on

hub:
  credentials:
    pdpID: "..." # Optional. Identifier for this Cerbos instance.
    clientID: "..." # ClientID
    clientSecret: "..." # ClientSecret

storage:
  driver: hub
  hub:
    remote:
      deploymentID: latest # The deployment ID to load policies for
```

Assuming you saved the configuration file as `.cerbos.yaml` in the current directory, you can start Cerbos as follows:

```shell
docker run --rm --name cerbos \
  -v $(pwd):/conf \
  -p 3592:3592 -p 3593:3593 \
  ghcr.io/cerbos/cerbos:latest server --config=/conf/.cerbos.yaml
```

See [Configuration](../cerbos/latest/configuration/index.html) for more information about configuring Cerbos.

## [](#%5Fmonitoring)Monitoring

The Decision points page in Cerbos Hub provides a view of all the recently connected PDP instances of the workspace.

![Connected instances](_images/connected_pdps.png)

Deployments
====================

A deployment is a specific configuration of [policy stores](policy-stores.html) (such as ‘production’ or ‘staging’) that can be connected to a set of PDPs. Each new change to the underlying store(s) results in a new policy build that’s automatically delivered to the [policy decision points (PDPs)](decision-points.html) if the tests are successful.
Source agnostic inputs
Populate a policy store from any Git provider, CI system, API, CLI, or direct upload, so your existing workflows remain intact.

Multi-store composition
Reference multiple stores in a deployment to separate ownership, for example security team versus product team, or to blend static Git-managed policies with dynamic API-driven rules.

End-to-end automation
Building, testing, and distribution of policies are fully managed by Cerbos Hub, giving you a consistent CI/CD style pipeline for authorization without the need for extra infrastructure.

Strong versioning
Every deployment attempt is attached to a set of immutable policy store versions, making it easy to audit exactly which policies were in effect at any given point in time and to revert any changes if needed.

## [](#%5Fbuild%5Flife%5Fcycle)Build life cycle

Whenever Cerbos Hub detects a change in any policy store connected to a deployment, it launches a new build.

1. **In progress**: The build is listed in the Build section of the deployment with the status **In progress**.
2. **Compilation failures**: If policy compilation fails, the error is surfaced next to the build version so you can diagnose it quickly.
3. **Test execution**: After successful compilation, Cerbos Hub runs all policy tests found across the contributing stores. Failures are displayed in the build details view with full logs for debugging.
4. **Bundle generation**: When compilation and tests pass, the bundle status changes to **Generated** and all PDPs that are assigned to this deployment receive a push notification instructing them to download and activate the bundle immediately.
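Conceptually, the life cycle above is a small state progression. The Python sketch below models it for illustration; the state names mirror the statuses shown in Cerbos Hub, but the function is a conceptual model, not Hub's actual implementation.

```python
from enum import Enum


class BuildStatus(Enum):
    IN_PROGRESS = "In progress"
    COMPILATION_FAILED = "Compilation failed"
    TESTS_FAILED = "Tests failed"
    GENERATED = "Generated"


def run_build(compiles: bool, tests_pass: bool) -> BuildStatus:
    """Conceptual model of how a policy store change flows through a build."""
    if not compiles:
        # Error is surfaced next to the build version in the deployment view.
        return BuildStatus.COMPILATION_FAILED
    if not tests_pass:
        # Failures appear in the build details view with full logs.
        return BuildStatus.TESTS_FAILED
    # Bundle generated; PDPs assigned to the deployment are notified to fetch it.
    return BuildStatus.GENERATED
```

Only a build that reaches `GENERATED` is pushed to the connected PDPs; failed builds leave the previously active bundle in place.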
For details on creating policy stores and connecting PDPs to receive bundles, see the related guides:

* [Policy stores](policy-stores.html)
* [PDP configuration](decision-points.html)

## [](#%5Fbest%5Fpractices)Best practices

Use meaningful names
Name deployments after their purpose such as application, environment, or team, for example `payments-service-production`.

Automate testing
Include comprehensive test cases with each policy store to catch regressions before they reach production PDPs.

Validate in staging
Use staging deployments to verify policy changes in a pre-production environment before promoting to production.

Getting started
====================

## [](#%5Fprerequisites)Prerequisites

* A set of Cerbos policies. An example set of policies is available at .
* Cerbos version 0.45.1 or higher.
* Outbound internet access from your Cerbos instances so that they can connect to Cerbos Hub to fetch bundle updates and, if enabled, upload audit logs.

## [](#%5Fcreate%5Fa%5Fpolicy%5Fstore)Create a policy store

Cerbos Hub uses policy stores to manage your policies. A policy store is a collection of policies and tests that can be built into a deployment and distributed to Cerbos PDPs. For the quick start, you can create a policy store using the browser and upload a ZIP file containing policies ([example](https://github.com/cerbos/example-cerbos-policy-repository/archive/refs/heads/main.zip)) or fork the GitHub [example repository](https://github.com/cerbos/example-cerbos-policy-repository) and connect it to Cerbos Hub.

### [](#%5Fupload%5Fpolicies%5Fvia%5Fbrowser)Upload policies via browser

1. Sign in to Cerbos Hub at and follow the on-boarding wizard to create an Organization and its first Workspace.
2. Inside the Workspace, select **Policy stores** then **New store**.
3. Give the store a clear name, for example `orders-service`, choose **Browser upload** as the source, and click **Create**.
4.
In the store detail page, click **Upload files** and select a ZIP file containing your policies. The ZIP file should contain the policies in the root directory, not in a subdirectory.
5. Cerbos Hub immediately ingests the ZIP file, compiles the policies, and shows the first successful build.

### [](#%5Fgithub%5Frepository)GitHub repository

1. Sign in to Cerbos Hub at and follow the on-boarding wizard to create an Organization and its first Workspace.
2. Inside the Workspace, select **Policy stores** then **New store**.
3. Give the store a clear name, for example `orders-service`, choose **GitHub repository** as the source and connect to your GitHub account.
4. Pick the branch you want Hub to track, usually `main`, and save. Cerbos Hub immediately ingests the repository, compiles the policies, and shows the first successful build.

You can create additional stores for other branches, teams or projects.

## [](#%5Fcreate%5Fa%5Fdeployment)Create a Deployment

Deployments package policies from one or more policy stores into versioned bundles that are automatically distributed to connected Cerbos PDPs.

1. Open **Deployments** then click **New deployment**.
2. Select the store you just created.
3. Click **Create**.

Hub starts the initial build. When it finishes, note the deployment ID shown on the detail page. You will need this ID to configure the PDP.

## [](#%5Fgenerate%5Fclient%5Fcredentials)Generate client credentials

Navigate to **Settings** → **Client credentials**, click **Generate a client credential**, give it a name, and select **Read & Write** so that policies can be pulled down and audit logs pushed back. Copy both the Client ID and Client secret. The secret is shown only once.

## [](#%5Fconfigure%5Fand%5Frun%5Fa%5Fcerbos%5Fpdp)Configure and run a Cerbos PDP

You can pass the Hub connection settings as environment variables or in a YAML configuration file.
The example below uses environment variables for a quick start:

```shell
# CERBOS_HUB_DEPLOYMENT_ID: the deployment ID from Hub
# CERBOS_HUB_CLIENT_ID and CERBOS_HUB_CLIENT_SECRET: from Deployment ▸ Client credentials
docker run --rm --name cerbos \
  -p 3592:3592 -p 3593:3593 \
  -e CERBOS_HUB_DEPLOYMENT_ID="..." \
  -e CERBOS_HUB_CLIENT_ID="..." \
  -e CERBOS_HUB_CLIENT_SECRET="..." \
  ghcr.io/cerbos/cerbos:latest server
```

Optional variable:

CERBOS\_HUB\_PDP\_ID
The friendly name that will appear on the Cerbos Hub monitoring page. If omitted, a random identifier is generated.

### [](#%5Fyaml%5Falternative)YAML alternative

```yaml
server:
  httpListenAddr: ":3592"
  grpcListenAddr: ":3593"

hub:
  credentials:
    pdpID: "orders-pdp-01" # Optional
    clientID: "..."
    clientSecret: "..."

storage:
  driver: hub
  hub:
    remote:
      deploymentID: "..." # Deployment ID from Hub
```

Assuming you saved the file as `.cerbos.yaml` in the current directory, start Cerbos with:

```shell
docker run --rm --name cerbos \
  -v $(pwd):/conf \
  -p 3592:3592 -p 3593:3593 \
  ghcr.io/cerbos/cerbos:latest server --config=/conf/.cerbos.yaml
```

See [Configuration](../cerbos/latest/configuration/index.html) for advanced configuration options.

## [](#%5Fenable%5Faudit%5Flog%5Fcollection%5Foptional)Enable audit log collection (optional)

Add the Hub audit backend to stream decision logs to Cerbos Hub:

```yaml
audit:
  backend: hub
  hub:
    storagePath: "/var/cerbos/audit-buffer" # Local buffer used when the network is unavailable
```

Refer to [Audit log collection](audit-log-collection.html) for details on filtering sensitive fields and other advanced options.

With a policy store connected, a deployment created, and at least one PDP running, you are ready to iterate on your policies. Push a change to the repository, watch Cerbos Hub build a new deployment version, and see the PDP update itself automatically within seconds.
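To confirm the PDP is serving decisions, you can call its `CheckResources` HTTP endpoint. The sketch below builds such a request in Python; the principal, resource kind, IDs and actions are illustrative values, not part of the example policies.

```python
import json
import urllib.request

# Illustrative CheckResources request; principal/resource values are made up.
payload = {
    "requestId": "quickstart-check",
    "principal": {"id": "alice", "roles": ["user"], "attr": {"department": "sales"}},
    "resources": [
        {
            "resource": {"kind": "expense", "id": "xp-1", "attr": {"ownerId": "alice"}},
            "actions": ["view", "approve"],
        }
    ],
}

body = json.dumps(payload).encode("utf-8")
req = urllib.request.Request(
    "http://localhost:3592/api/check/resources",  # the PDP's HTTP listen address
    data=body,
    headers={"Content-Type": "application/json"},
)

# Uncomment once the PDP container from the quick start is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

The response contains an `ALLOW` or `DENY` effect per action, which should match what your policies dictate for the given principal and resource.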
Cerbos Hub
====================

Cerbos Hub simplifies the process of authoring authorization policies, testing changes, rolling out updates to production, and aggregating audit logs about authorization decisions. It is a scalable solution for developers who want to save time, streamline their workflows, and confidently roll out authorization updates, freeing them to focus on delivering great products to their customers.

## [](#%5Ffeatures)Features

Collaborative policy editing
Cerbos Hub playgrounds provide private, collaborative, IDE-like development environments to help author and test policies with ease.

Managed build and release pipeline
Cerbos Hub automatically validates, tests, signs, and distributes every policy change, giving you a turnkey CI/CD pipeline without extra infrastructure.

Source agnostic policy stores
Populate policy stores from any source using any of the many integration methods available.

Coordinated rollout of policy changes
Cerbos Hub pushes new policy bundles to every connected PDP instance, ensuring fleet-wide consistency and eliminating manual polling or reload logic.

PDP monitoring
Cerbos Hub shows which policies each PDP is serving, the exact bundle version, and when the instance was last seen.

Audit log aggregation
With one line of configuration you can stream PDP decision logs to Cerbos Hub, filter sensitive fields locally, and retain searchable history without running a separate log stack.

## [](#%5Fhow%5Fit%5Fworks)How it works

Cerbos Hub is a cloud-hosted management control plane, while Cerbos instances and the data they process remain strictly inside your network perimeter. Switching to Cerbos Hub requires only a minor configuration change to your existing Cerbos deployment. After the switch, PDPs receive optimized policy bundles from Cerbos Hub instead of compiling policies locally.

![How Cerbos Hub works](_images/how_cerbos_hub_works.png)

1.
Make a change to policies and submit it to a policy store through Git, a CI pipeline, an API call, a CLI upload, or a direct drag and drop in the browser.
2. Cerbos Hub detects the update and starts a new build.
3. Cerbos Hub validates and compiles the policies.
4. It runs all policy tests found in the store.
5. It generates a compact encrypted policy bundle.
6. It increments the deployment version and notifies every PDP that is assigned to this deployment that a new bundle is available.
7. PDP instances download the new bundle and start serving it immediately.

Optionally, configure Cerbos Hub as an audit backend for the PDPs. Logs are streamed securely, with sensitive data removed locally before leaving your network perimeter.

Collaborative policy playgrounds
====================

Cerbos Hub playgrounds are fully interactive, private development environments that provide an IDE-like experience for authoring policies. Quickly create or edit policies and test fixtures with instant feedback on syntax issues and test failures. Take advantage of the powerful collaborative editing features to pair with colleagues to develop new authorization rules or use it as a sandbox to train new team members on Cerbos policies.

## [](#%5Fcreating%5Fa%5Fplayground)Creating a playground

Log in to Cerbos Hub and select one of the organizations you’re a member of. Click on the **Playgrounds** tab to view existing playgrounds or to create a new one. When creating a new playground, you have a number of options:

* Create an empty playground to start from scratch
* Generate a starter RBAC policy by answering a few questions about your user roles, resources and actions
* Start with an example template set of policies covering common use cases

After creating a playground, click on the **Video tutorial** button on the top right corner of the screen to learn the basics.
## [](#%5Fplayground%5Fengine%5Fsettings)Playground engine settings

The playground engine settings in the Settings tab allow you to configure the [Cerbos PDP engine](../cerbos/latest/configuration/engine.html) used when evaluating policy during development.

* **Default policy version**: When a request does not explicitly specify the policy version, the Cerbos engine attempts to find a matching policy that has its version set to `default`. You can change this fallback value by setting the default policy version.
* **Lenient scope search**: When lenient scope search is enabled, if a policy with scope `a.b.c` does not exist in the store, Cerbos will attempt to find scopes `a.b`, `a` and the empty scope, in that order.
* **Globals**: Global variables are a way to pass environment-specific information to policy conditions. Values defined here are exposed to policy conditions via the `globals` object.

You can find full details of these settings in the [Cerbos configuration reference](../cerbos/latest/configuration/engine.html).

## [](#%5Ftry%5Fthe%5Fapi)Try the API

One of the key features of the playground is the ability to try out authorization checks against your policies without having to run a local Policy Decision Point (PDP). In the Implement tab of the sidebar, you can experiment with both the Check API and the Plan API. This section allows you to select from your test fixtures and view the request structure needed to call the PDP, as well as the expected response.

![Try the API](_images/playground_try_api.png)

## [](#%5Fconnect%5Fa%5Fpdp%5Fto%5Fa%5Fplayground)Connect a PDP to a playground

For developers looking to test the integration of their application, Cerbos Hub offers a Playground PDP connection. This feature allows you to start up a PDP locally in your development environment and connect it to your current playground instance.
Any changes you make in the playground are immediately reflected in your local PDP, enabling fast iteration and providing a real-time feedback loop for your integration efforts. Instructions for starting up a local PDP can be found in the Implement tab under "Connect a PDP".

![Connect a PDP](_images/playground_connect_pdp.png)

Note that while the Playground PDP connection is an excellent tool for rapid development and testing, it’s not intended for production use. When you’re ready to release your application to production environments, you should [create a workspace](getting-started.html) with a policy store and configure the PDPs to receive bundle updates via Cerbos Hub’s fully managed CI/CD pipeline. This also lets you take advantage of Cerbos Hub audit log collection to effortlessly store and analyze all the decisions made by your PDPs.

Policy stores: CLI upload (binary)
====================

## [](#%5Finstallation)Installation

`cerbosctl` binaries are available for multiple operating systems and architectures. See the [releases page](https://github.com/cerbos/cerbos/releases/tag/v0.45.1) for all available downloads.
| OS | Arch | Bundle |
| ----- | --------- | ----------------------------------------- |
| Linux | x86-64 | cerbosctl\_0.45.1\_Linux\_x86\_64.tar.gz |
| Linux | arm64 | cerbosctl\_0.45.1\_Linux\_arm64.tar.gz |
| MacOS | universal | cerbosctl\_0.45.1\_Darwin\_all.tar.gz |
| MacOS | x86-64 | cerbosctl\_0.45.1\_Darwin\_x86\_64.tar.gz |
| MacOS | arm64 | cerbosctl\_0.45.1\_Darwin\_arm64.tar.gz |

You can download the binaries by running the following command. Substitute `` with the appropriate value from the above table.

```sh
curl -L -o cerbosctl.tar.gz "https://github.com/cerbos/cerbos/releases/download/v0.45.1/"
tar xvf cerbosctl.tar.gz
chmod +x cerbosctl
mv cerbosctl /usr/local/bin/ # or somewhere on your PATH
```

Cerbos binaries are signed using [sigstore](https://www.sigstore.dev) tools during the automated build process and the verification bundle is published along with the binary as a `.bundle` file. The following example demonstrates how to verify the Linux x86-64 bundle archive.

```sh
# Download the bundle archive
curl -L \
  -o cerbosctl_0.45.1_Linux_x86_64.tar.gz \
  "https://github.com/cerbos/cerbos/releases/download/v0.45.1/cerbosctl_0.45.1_Linux_x86_64.tar.gz"

# Download the verification bundle
curl -L \
  -o cerbosctl_0.45.1_Linux_x86_64.tar.gz.bundle \
  "https://github.com/cerbos/cerbos/releases/download/v0.45.1/cerbosctl_0.45.1_Linux_x86_64.tar.gz.bundle"

# Verify the signature
cosign verify-blob \
  --certificate-oidc-issuer="https://token.actions.githubusercontent.com" \
  --certificate-identity="https://github.com/cerbos/cerbos/.github/workflows/release.yaml@refs/tags/v0.45.1" \
  --bundle="cerbosctl_0.45.1_Linux_x86_64.tar.gz.bundle" \
  "cerbosctl_0.45.1_Linux_x86_64.tar.gz"
```
## [](#%5Fusage)Usage

The `cerbosctl` CLI tool can be used to upload policies to a policy store in Cerbos Hub. First generate a set of client credentials for the policy store in Cerbos Hub - you can do this in the **Client credentials** section in the UI. Make sure to select the `Read & Write` option when creating the credentials to allow uploading policies. Then export the following environment variables with the values from the generated client credentials and the store ID:

```sh
export CERBOS_HUB_CLIENT_ID=...
export CERBOS_HUB_CLIENT_SECRET=...
export CERBOS_HUB_STORE_ID=...
```

The following command uploads policy files from the current directory and replaces all the files in the store.

```sh
cerbosctl hub store replace-files .
```

## [](#%5Ffull%5Fcli%5Freference)Full CLI Reference

```none
Usage: cerbosctl hub store --store-id=STRING --client-id=STRING --client-secret=STRING [flags]

Interact with Cerbos Hub managed stores.
Requires an existing managed store and the API credentials to access it. The store ID and credentials can be provided using either command-line flags or environment variables. Flags: -h, --help Show context-sensitive help. --store-id=STRING ID of the store to operate on ($CERBOS_HUB_STORE_ID) --client-id=STRING Client ID of the access credential ($CERBOS_HUB_CLIENT_ID) --client-secret=STRING Client secret of the access credential ($CERBOS_HUB_CLIENT_SECRET) Commands: hub store list-files --store-id=STRING --client-id=STRING --client-secret=STRING [flags] List store files hub store get-files --store-id=STRING --client-id=STRING --client-secret=STRING --output-path=STRING ... [flags] Download files from the store hub store download --store-id=STRING --client-id=STRING --client-secret=STRING [flags] Download the entire store hub store replace-files --store-id=STRING --client-id=STRING --client-secret=STRING [flags] Overwrite the store with the given set of files hub store add-files --store-id=STRING --client-id=STRING --client-secret=STRING ... [flags] Add files to the store hub store delete-files --store-id=STRING --client-id=STRING --client-secret=STRING ... [flags] Delete files from the store ``` Policy stores: CLI upload (Container) ==================== The `cerbosctl` CLI tool is also available as a Docker container image: Via ghcr.io ```sh docker run --rm -it ghcr.io/cerbos/cerbosctl:latest hub store ``` Via docker.io ```sh docker run --rm -it docker.io/cerbos/cerbosctl:latest hub store ``` ## [](#%5Fusage)Usage The `cerbosctl` container can be used to upload policies to a policy store in Cerbos Hub. First generate a set of client credentials for the policy store in Cerbos Hub - you can do this in the **Client credentials** section in the UI. Make sure to select the `Read & Write` option when creating the credentials to allow uploading policies. 
Then export the following environment variables with the values from the generated client credentials and the store ID:

```sh
export CERBOS_HUB_CLIENT_ID=...
export CERBOS_HUB_CLIENT_SECRET=...
export CERBOS_HUB_STORE_ID=...
```

The following command uploads policy files from the policies directory and replaces all the files in the store.

```sh
docker run -it \
  -e CERBOS_HUB_CLIENT_ID=$CERBOS_HUB_CLIENT_ID \
  -e CERBOS_HUB_CLIENT_SECRET=$CERBOS_HUB_CLIENT_SECRET \
  -e CERBOS_HUB_STORE_ID=$CERBOS_HUB_STORE_ID \
  -v $(pwd):/policies \
  ghcr.io/cerbos/cerbosctl:latest \
  hub store replace-files /policies
```

## [](#%5Ffull%5Fcli%5Freference)Full CLI Reference

```none
Usage: cerbosctl hub store --store-id=STRING --client-id=STRING --client-secret=STRING [flags]

Interact with Cerbos Hub managed stores.

Requires an existing managed store and the API credentials to access it. The
store ID and credentials can be provided using either command-line flags or
environment variables.

Flags:
  -h, --help                  Show context-sensitive help.
      --store-id=STRING       ID of the store to operate on ($CERBOS_HUB_STORE_ID)
      --client-id=STRING      Client ID of the access credential ($CERBOS_HUB_CLIENT_ID)
      --client-secret=STRING  Client secret of the access credential ($CERBOS_HUB_CLIENT_SECRET)

Commands:
  hub store list-files --store-id=STRING --client-id=STRING --client-secret=STRING [flags]
    List store files

  hub store get-files --store-id=STRING --client-id=STRING --client-secret=STRING --output-path=STRING ... [flags]
    Download files from the store

  hub store download --store-id=STRING --client-id=STRING --client-secret=STRING [flags]
    Download the entire store

  hub store replace-files --store-id=STRING --client-id=STRING --client-secret=STRING [flags]
    Overwrite the store with the given set of files

  hub store add-files --store-id=STRING --client-id=STRING --client-secret=STRING ... [flags]
    Add files to the store

  hub store delete-files --store-id=STRING --client-id=STRING --client-secret=STRING ...
[flags] Delete files from the store ``` Policy stores: CLI upload (Homebrew) ==================== ## [](#%5Finstallation)Installation `cerbosctl` binaries are available via Homebrew for simple installation on macOS. To install the `cerbosctl` CLI tool, run the following command: ```sh brew tap cerbos/tap brew install cerbos ``` ## [](#%5Fusage)Usage The `cerbosctl` CLI tool can be used to upload policies to a policy store in Cerbos Hub. First generate a set of client credentials for the policy store in Cerbos Hub - you can do this in the **Client credentials** section in the UI. Make sure to select the `Read & Write` option when creating the credentials to allow uploading policies. Then export the following environment variables with the values from the generated client credentials and the store ID: ```sh export CERBOS_HUB_CLIENT_ID=... export CERBOS_HUB_CLIENT_SECRET=... export CERBOS_HUB_STORE_ID=... ``` The following command uploads policy files from the current directory and replaces all the files in the store. ```sh cerbosctl hub store replace-files . ``` ## [](#%5Ffull%5Fcli%5Freference)Full CLI Reference ```none Usage: cerbosctl hub store --store-id=STRING --client-id=STRING --client-secret=STRING [flags] Interact with Cerbos Hub managed stores. Requires an existing managed store and the API credentials to access it. The store ID and credentials can be provided using either command-line flags or environment variables. Flags: -h, --help Show context-sensitive help. --store-id=STRING ID of the store to operate on ($CERBOS_HUB_STORE_ID) --client-id=STRING Client ID of the access credential ($CERBOS_HUB_CLIENT_ID) --client-secret=STRING Client secret of the access credential ($CERBOS_HUB_CLIENT_SECRET) Commands: hub store list-files --store-id=STRING --client-id=STRING --client-secret=STRING [flags] List store files hub store get-files --store-id=STRING --client-id=STRING --client-secret=STRING --output-path=STRING ... 
[flags]
    Download files from the store

  hub store download --store-id=STRING --client-id=STRING --client-secret=STRING [flags]
    Download the entire store

  hub store replace-files --store-id=STRING --client-id=STRING --client-secret=STRING [flags]
    Overwrite the store with the given set of files

  hub store add-files --store-id=STRING --client-id=STRING --client-secret=STRING ... [flags]
    Add files to the store

  hub store delete-files --store-id=STRING --client-id=STRING --client-secret=STRING ... [flags]
    Delete files from the store
```

Policy stores: CLI upload (npx)
====================

## [](#%5Finstallation)Installation

`cerbosctl` binaries are published to npm for easy installation. To install the `cerbosctl` CLI tool globally, run the following command:

```sh
npm install -g cerbosctl
```

Alternatively, you can use `npx` to run the CLI without installing it globally.

## [](#%5Fusage)Usage

The `cerbosctl` CLI tool can be used to upload policies to a policy store in Cerbos Hub. First, generate a set of client credentials for the policy store in Cerbos Hub; you can do this in the **Client credentials** section in the UI. Make sure to select the `Read & Write` option when creating the credentials to allow uploading policies.

Then export the following environment variables with the values from the generated client credentials and the store ID:

```sh
export CERBOS_HUB_CLIENT_ID=...
export CERBOS_HUB_CLIENT_SECRET=...
export CERBOS_HUB_STORE_ID=...
```

The following commands upload the policy files from the current directory and replace all the files in the store.

### [](#%5Fvia%5Fglobal%5Fnpm%5Finstallation)via global NPM installation

```sh
cerbosctl hub store replace-files .
```

### [](#%5Fvia%5Fnpx)via npx

```sh
npx cerbosctl hub store replace-files .
```

## [](#%5Ffull%5Fcli%5Freference)Full CLI Reference

```none
Usage: cerbosctl hub store --store-id=STRING --client-id=STRING --client-secret=STRING [flags]

Interact with Cerbos Hub managed stores.
Requires an existing managed store and the API credentials to access it. The store ID and credentials can be provided using either command-line flags or environment variables.

Flags:
  -h, --help                  Show context-sensitive help.
  --store-id=STRING           ID of the store to operate on ($CERBOS_HUB_STORE_ID)
  --client-id=STRING          Client ID of the access credential ($CERBOS_HUB_CLIENT_ID)
  --client-secret=STRING      Client secret of the access credential ($CERBOS_HUB_CLIENT_SECRET)

Commands:
  hub store list-files --store-id=STRING --client-id=STRING --client-secret=STRING [flags]
    List store files

  hub store get-files --store-id=STRING --client-id=STRING --client-secret=STRING --output-path=STRING ... [flags]
    Download files from the store

  hub store download --store-id=STRING --client-id=STRING --client-secret=STRING [flags]
    Download the entire store

  hub store replace-files --store-id=STRING --client-id=STRING --client-secret=STRING [flags]
    Overwrite the store with the given set of files

  hub store add-files --store-id=STRING --client-id=STRING --client-secret=STRING ... [flags]
    Add files to the store

  hub store delete-files --store-id=STRING --client-id=STRING --client-secret=STRING ... [flags]
    Delete files from the store
```

Cerbos Hub GitHub Integration
====================

The Cerbos Hub GitHub integration allows you to manage your policies in a GitHub repository. This integration supports both public and private repositories, enabling you to store your policies securely and manage them using Git workflows.

## [](#%5Fprerequisites)Prerequisites

Before you can use the Cerbos Hub GitHub integration, you need the following:

* A GitHub account.
* A GitHub repository where you want to store your policies.
* Permission to add a GitHub App to your repository.

## [](#%5Fsetting%5Fup%5Fthe%5Fgithub%5Fintegration)Setting Up the GitHub Integration

To set up the GitHub integration, follow these steps:

1. Go to [Cerbos Hub](https://hub.cerbos.dev) and log in with your Cerbos account.
2.
Inside a workspace, create a new policy store by clicking on "Policy Stores" in the sidebar.
3. In the Import tab, select "GitHub" as the source for your policy store.
4. Follow the prompts to authorize Cerbos Hub to access your GitHub account.
5. Select the repository you want to use for storing your policies.
6. Configure the branch or tag for the integration to track, and optionally a directory where your policies are stored.

   ![GitHub connection setup](_images/policy_store_github_connection_setup.png)
7. Click "Save" to complete the setup.

## [](#%5Fusing%5Fthe%5Fgithub%5Fintegration)Using the GitHub Integration

Once the GitHub integration is set up, you can monitor and manage your policies directly in the GitHub connection tab. The integration automatically syncs changes made to the policies in your GitHub repository.

![GitHub connection status](_images/policy_store_github_connection.png)

To reconfigure the GitHub integration, click the "Update configuration" button in the GitHub connection tab. This allows you to change the repository, branch, or directory settings.

Policy stores: .NET SDK
====================

The .NET SDK for policy stores allows you to interact with policy stores programmatically using .NET. This SDK provides a set of functions and types that make it easy to upload, manage, and retrieve policies from a policy store.

Find the .NET SDK on GitHub:

Policy stores: Go SDK
====================

The Go SDK for policy stores allows you to interact with policy stores programmatically using Go. This SDK provides a set of functions and types that make it easy to upload, manage, and retrieve policies from a policy store.

Find the Go SDK on GitHub:

Policy stores: Java SDK
====================

The Java SDK for policy stores allows you to interact with policy stores programmatically using Java. This SDK provides a set of functions and types that make it easy to upload, manage, and retrieve policies from a policy store.
Find the Java SDK on GitHub:

Policy stores: JavaScript @cerbos/hub SDK
====================

The `@cerbos/hub` JavaScript SDK for policy stores allows you to interact with policy stores programmatically using JavaScript/TypeScript. This SDK provides a set of functions and types that make it easy to upload, manage, and retrieve policies from a policy store.

Find the JavaScript SDK on GitHub:

Policy stores: PHP SDK
====================

The PHP SDK for policy stores allows you to interact with policy stores programmatically using PHP. This SDK provides a set of functions and types that make it easy to upload, manage, and retrieve policies from a policy store.

Find the PHP SDK on GitHub:

Policy stores: Python SDK
====================

The Python SDK for policy stores allows you to interact with policy stores programmatically using Python. This SDK provides a set of functions and types that make it easy to upload, manage, and retrieve policies from a policy store.

Find the Python SDK on GitHub:

Policy stores: Rust SDK
====================

The Rust SDK for policy stores allows you to interact with policy stores programmatically using Rust. This SDK provides a set of functions and types that make it easy to upload, manage, and retrieve policies from a policy store.

Find the Rust SDK on GitHub:

Policy stores: Browser Upload
====================

The browser upload feature allows you to manually upload policy files directly to a policy store using the Cerbos Hub web interface. This is useful when you want to quickly add policies without using a Git repository or CI/CD pipeline.

## [](#%5Fuploading%5Fpolicies)Uploading Policies

![Browser upload](_images/policy_store_upload.png)

To upload policies using the browser, follow these steps:

1. Go to [Cerbos Hub](https://hub.cerbos.dev) and log in with your Cerbos account.
2. Inside a workspace, create a new policy store by clicking on "Policy Stores" in the sidebar.
3.
In the Import tab, select "Browser upload" as the source for your policy store.
4. Click on the "Upload files" button to select a ZIP file of your policies, or drag and drop the ZIP file into the designated area.
5. Once the file is uploaded, the policies will be processed and added to the policy store.
6. You can then view and manage the uploaded policies in the policy store's Policies tab.

| | Uploading policies via the browser does a full replace of the existing policies in the store. If you want to append or update specific policies, consider using a Git repository or CI/CD pipeline instead. |
| --- | --- |

Policy stores
====================

Policy stores are flexible containers for Cerbos policy files managed inside Cerbos Hub. A store decouples policy storage from any specific source control system, letting each team choose the workflow that best fits its needs while Cerbos Hub guarantees validation, versioning, and secure delivery to [deployments](deployments.html).

## [](#%5Fwhy%5Fuse%5Fpolicy%5Fstores)Why use policy stores

**Clear boundaries and ownership**
Create one store per team, product, tenant, or environment so each group owns just the policies that matter to them, reducing cognitive load.

**Independent update cadence**
Teams can update their store at any time without blocking others; a new deployment is built only when you choose to combine the stores.

**Layered policy logic**
Combine multiple stores in a single deployment to apply global guard rails, platform defaults, application-level rules, and tenant overrides in a predictable hierarchy.

**Source agnostic workflows**
Populate a store from Git, a CI pipeline, the Cerbos Hub API, the [cerbosctl CLI](../cerbos/latest/cli/cerbosctl.html), or a direct UI upload, with no GitHub lock-in.
**Full visibility and auditability**
View every policy file in Hub, see which store and commit contributed to a deployment, and trace any PDP decision back to the exact policy version.

## [](#%5Fsupported%5Fingestion%5Fmethods)Supported ingestion methods

| Method | Typical use case |
| ----------------- | ---------------- |
| Git repository | Structured policy-as-code managed with pull requests and reviews. If you are using GitHub, you can connect your repository directly to a policy store in Cerbos Hub. |
| CI or CD pipeline | Push the policies produced by a build job, for example when generating service-specific policies or promoting between environments. |
| Cerbos Hub SDKs | Programmatic updates such as per-tenant roles, user-defined permissions, or event-driven changes. |
| cerbosctl CLI | Local scripting, quick one-off uploads, or integration into existing tooling. |
| Browser upload | Ad hoc tweaks, demos, or importing legacy policy files. |

## [](#%5Flife%5Fcycle%5Finside%5Fcerbos%5Fhub)Life-cycle inside Cerbos Hub

1. Create a store in your **Cerbos Hub** workspace.
2. Add or update policy files using any supported ingestion method.
3. Automatic validation ensures policies are correctly formatted.
4. Reference the store in one or more deployments to build a versioned bundle.

## [](#%5Fbest%5Fpractices)Best practices

**Use meaningful names**
Name stores after their responsibility, for example `security-global`, `platform-core`, `payments-service`, or `tenant-alpha`.

**Keep tests with policies**
Store-specific tests catch regressions early and run automatically on every change.

**Protect production stores**
Restrict write access and require review for stores that feed your production deployment.

**Isolate dynamic inputs**
Create a dedicated store for API-driven or user-defined policies to avoid mixing static and dynamic files.
**Review store composition**
Periodically confirm that each deployment references only the stores it needs, and in the correct order.

Release notes
====================

## [](#%5F2025%5F04%5F28)2025-04-28

### [](#%5Forganization%5Fdeletion)Organization deletion

You can now delete an organization in Cerbos Hub. To delete an organization, navigate to the organization settings page and click on the "Delete organization" button. Please note that this action is irreversible and will permanently delete all data associated with the organization.

## [](#%5F2025%5F03%5F12)2025-03-12

### [](#%5Fplayground)Playground

The effective derived roles for a user are now displayed in the playground when evaluating policies. This feature helps you understand which derived roles were activated for that user during that request.

### [](#%5Fembedded%5Fpolicy%5Fdecision%5Fpoint)Embedded Policy Decision Point

Time-based functions used in condition expressions, such as `getHours` and `getMinutes`, default to UTC unless the time zone is explicitly provided as an argument to the function. It’s recommended to review your policies to make sure that time calculations use the correct time zone. Refer to the [timestamps documentation](../cerbos/latest/policies/conditions.html#%5Ftimestamps) to identify the affected functions.

## [](#%5F2025%5F02%5F26)2025-02-26

### [](#%5Fembedded%5Fpolicy%5Fdecision%5Fpoint%5F2)Embedded Policy Decision Point

We’ve introduced support for capturing audit decision logs from Cerbos Hub Embedded Policy Decision Points (ePDPs) using the latest version of the [Cerbos JavaScript SDK](https://github.com/cerbos/cerbos-sdk-javascript). This feature enables organizations to track and analyze authorization decisions made locally in embedded environments, ensuring complete visibility and auditability without relying on a centralized PDP or Cerbos Hub.

## [](#%5F2025%5F02%5F01)2025-02-01

The Builds section of Cerbos Hub has been renamed Policies.
The Policies section now includes all the features previously available in Builds, such as policy versioning, policy history, and policy deployment. The Builds section has been removed from the Cerbos Hub navigation.

## [](#%5F2025%5F01%5F28)2025-01-28

### [](#%5Fplayground%5F2)Playground

Added support for [globals](../cerbos/latest/configuration/engine.html#%5Fglobals) in playground engine settings. Global variables defined in the [playground settings](playground.html) are exposed to policy conditions via the `globals` object.

Reliability
====================

When a PDP is connected to Cerbos Hub, it establishes a two-way communication channel. This channel is used to request the initial policy bundle from Cerbos Hub and subsequently receive push notifications about new bundle versions. Because there is no polling involved, all PDPs in your environment converge on a single version of the policies much more quickly.

We take the reliability and availability of Cerbos Hub very seriously. However, if for whatever reason the Cerbos Hub API becomes unavailable, the PDPs will continue to work with the last downloaded bundle while trying to re-establish the connection in the background. New PDPs will also be able to start with the last successfully built bundle even if the Cerbos Hub API is unavailable, because we serve those bundles through a separate service.

We recommend mounting a persistent storage disk to the Cerbos pod and pointing to it using the `storage.bundle.remote.cacheDir` configuration setting. This allows you to launch Cerbos with the `CERBOS_HUB_OFFLINE` environment variable set, in which case the PDP uses the last cached bundle from the cache directory.

In the worst-case scenario, you can switch your PDP to use the git storage driver and configure it to read the policies directly from the git repository that’s connected to Cerbos Hub.

You can monitor whether a PDP is connected to Cerbos Hub using the `cerbos_dev_hub_connected` gauge in the Prometheus metrics.
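The cache-directory setup described above can be sketched as a short script. This is a minimal sketch, not a complete production configuration: the `storage.bundle.remote.cacheDir` key and `CERBOS_HUB_OFFLINE` variable come from the settings named in the text, while the file paths are illustrative assumptions and the Hub credential settings needed for the initial online run are omitted.

```shell
# Minimal sketch of an offline-capable PDP configuration.
# Paths are illustrative; in production, CACHE_DIR should sit on a
# persistent volume mounted into the Cerbos pod.
CACHE_DIR=/tmp/cerbos-bundle-cache
CONF=/tmp/cerbos-offline-demo.yaml

mkdir -p "$CACHE_DIR"

# Point the bundle storage driver at the persistent cache directory.
# (Hub credential and bundle label settings are omitted for brevity.)
cat > "$CONF" <<EOF
storage:
  driver: bundle
  bundle:
    remote:
      cacheDir: ${CACHE_DIR}
EOF

# After at least one successful online run has populated the cache,
# the PDP can start even while Cerbos Hub is unreachable:
#   CERBOS_HUB_OFFLINE=true cerbos server --config="$CONF"
echo "wrote $CONF"
```

With a persistent volume mounted at the cache directory, pod restarts reuse the last downloaded bundle instead of failing while the Hub API is unreachable.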
Troubleshooting
====================

## [](#%5Fbuilds%5Farent%5Ftriggered%5Fwhen%5Fmultiple%5Ftags%5Fare%5Fpushed%5Fto%5Fthe%5Frepository)Builds aren’t triggered when multiple tags are pushed to the repository

GitHub has a [known limitation](https://docs.github.com/en/developers/webhooks-and-events/webhooks/webhook-events-and-payloads#push) where connected apps like Cerbos Hub won’t receive notifications for repository changes if more than three tags are pushed simultaneously. To avoid this issue, we suggest limiting the maximum number of references that can be pushed at once to three. You can find information on how to do this in the [GitHub documentation](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/managing-repository-settings/managing-the-push-policy-for-your-repository#limiting-how-many-branches-and-tags-can-be-updated-in-a-single-push).

User management
====================

A Cerbos Hub user can have a role at the organization level and an optional set of roles for each workspace. All of these roles are considered when determining the permissions of any particular user.

## [](#%5Forganization%5Froles)Organization roles

Organization roles, except for the `Member` role, apply to all workspaces within the organization. Users with the organization role of `Member` must be explicitly granted workspace roles in order to access a workspace.
| Action | Owner | Developer | Analyst | Viewer | Member |
| ----------------------------- | ----- | --------- | ------- | ------ | ------ |
| View organization | ✅ | ✅ | ✅ | ✅ | ✅ |
| Modify organization | ✅ | ❌ | ❌ | ❌ | ❌ |
| Manage members | ✅ | ❌ | ❌ | ❌ | ❌ |
| Invite a member | ✅ | ❌ | ❌ | ❌ | ❌ |
| Create a workspace | ✅ | ✅ | ✅ | ✅ | ✅ |
| Create a playground | ✅ | ✅ | ✅ | ✅ | ✅ |
| Update a playground | ✅ | ✅ | ✅ | ✅ | ✅ |
| Delete a playground | ✅ | ✅ | ✅ | ✅ | ✅ |
| Export a playground | ✅ | ✅ | ✅ | ✅ | ✅ |
| Connect a PDP to a playground | ✅ | ✅ | ✅ | ✅ | ✅ |

## [](#%5Fworkspace%5Froles)Workspace Roles

Permissions assigned at the organization level are inherited by all workspaces. Additionally, a user can be assigned specific roles within a workspace, potentially granting more permissions for that particular workspace only.

| Action | Owner | Developer | Analyst | Viewer |
| ------------------------ | ----- | --------- | ------- | ------ |
| View a workspace | ✅ | ✅ | ✅ | ✅ |
| View builds | ✅ | ✅ | ✅ | ✅ |
| View decision points | ✅ | ✅ | ✅ | ✅ |
| View issues | ✅ | ✅ | ✅ | ✅ |
| View audit logs | ✅ | ❌ | ✅ | ❌ |
| Manage API keys | ✅ | ✅ | ❌ | ❌ |
| Reset encryption key | ✅ | ❌ | ❌ | ❌ |
| Manage workspace members | ✅ | ❌ | ❌ | ❌ |
| Modify workspace | ✅ | ❌ | ❌ | ❌ |
| Delete a workspace | ✅ | ❌ | ❌ | ❌ |