Audit block

The audit block configures the audit logging settings for the Cerbos instance. Audit logs capture access records and decisions made by the engine along with the associated context data.

Cerbos API responses include a cerbosCallId field that contains the unique identifier under which the request was logged to the audit log (if enabled) and the Cerbos activity log. It is recommended that applications log this ID as part of their activity logs too so that those log entries can be joined together with Cerbos logs during log analysis to build a complete picture of the authorization decisions.
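
For example, a trimmed CheckResources response might look like the following (identifiers are illustrative):
{
  "requestId": "my-request-id",
  "results": [
    {
      "resource": {"kind": "order", "id": "order-1"},
      "actions": {"view": "EFFECT_ALLOW"}
    }
  ],
  "cerbosCallId": "9bsv0s2kvs305km0qrtg"
}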

Audit logging has some overhead in terms of resource usage (disk IO, CPU and memory). This overhead is usually negligible unless Cerbos is running in a resource-constrained environment. If resources are scarce or if you are expecting heavy traffic, disabling audit logging might have a positive impact on performance.

The full set of configuration options is shown below:
audit:
  accessLogsEnabled: false # AccessLogsEnabled defines whether access logging is enabled.
  backend: local # Backend states which backend to use for Audits.
  decisionLogFilters: # DecisionLogFilters define the filters to apply while producing decision logs.
    checkResources: # CheckResources defines the filters that apply to CheckResources calls.
      ignoreAllowAll: false # IgnoreAllowAll ignores responses that don't contain an EFFECT_DENY.
    planResources: # PlanResources defines the filters that apply to PlanResources calls.
      ignoreAll: false # IgnoreAll prevents any plan responses from being logged. Takes precedence over other filters.
      ignoreAlwaysAllow: false # IgnoreAlwaysAllow ignores ALWAYS_ALLOWED plans.
  decisionLogsEnabled: false # DecisionLogsEnabled defines whether logging of policy decisions is enabled.
  enabled: false # Enabled defines whether audit logging is enabled.
  excludeMetadataKeys: ['authorization'] # ExcludeMetadataKeys defines which gRPC request metadata keys should be excluded from the audit logs. Takes precedence over includeMetadataKeys.
  includeMetadataKeys: ['content-type'] # IncludeMetadataKeys defines which gRPC request metadata keys should be included in the audit logs.
  file:
    additionalPaths: [stdout] # AdditionalPaths to mirror the log output. Has performance implications. Use with caution.
    logRotation: # LogRotation settings (optional).
      maxFileAgeDays: 10 # MaxFileAgeDays sets the maximum age in days of old log files before they are deleted.
      maxFileCount: 10 # MaxFileCount sets the maximum number of files to retain.
      maxFileSizeMB: 100 # MaxFileSizeMB sets the maximum size of individual log files in megabytes.
    path: /path/to/file.log # Required. Path to the log file to use as output. The special values stdout and stderr can be used to write to stdout or stderr respectively.
  hub:
    advanced:
      bufferSize: 256
      flushInterval: 1s
      gcInterval: 60s
      maxBatchSize: 32
    mask: # Mask defines a list of attributes to exclude from the audit logs, specified as lists of JSONPaths
      checkResources:
        - inputs[*].principal.attr.foo
        - inputs[*].auxData
        - outputs
      metadata: ['authorization']
      peer:
        - address
        - forwarded_for
      planResources: ['input.principal.attr.nestedMap.foo']
    retentionPeriod: 168h # How long to keep records for
    storagePath: /path/to/dir # Path to store the data
  kafka:
    ack: all # Ack mode for producing messages. Valid values are "none", "leader" or "all" (default). Idempotency is disabled when mode is not "all".
    authentication: # Authentication
      tls:
        caPath: /path/to/ca.crt # Required. CAPath is the path to the CA certificate.
        certPath: /path/to/tls.cert # CertPath is the path to the client certificate.
        insecureSkipVerify: true # InsecureSkipVerify controls whether the server's certificate chain and host name are verified. Default is false.
        keyPath: /path/to/tls.key # KeyPath is the path to the client key.
        reloadInterval: 5m # ReloadInterval is the interval at which the TLS certificates are reloaded. The default is 0 (no reload).
    brokers: ['localhost:9092'] # Required. Brokers list to seed the Kafka client.
    clientID: cerbos # ClientID reported in Kafka connections.
    closeTimeout: 30s # CloseTimeout sets how long to wait for any remaining messages to be flushed when closing the client.
    compression: ['snappy'] # Compression sets the compression algorithm to use in order of priority. Valid values are "none", "gzip", "snappy", "lz4", "zstd". Default is ["snappy", "none"].
    encoding: json # Encoding format. Valid values are "json" (default) or "protobuf".
    maxBufferedRecords: 1000 # MaxBufferedRecords sets the maximum number of records the client should buffer in memory in async mode.
    produceSync: false # ProduceSync forces the client to produce messages to Kafka synchronously. This can have a significant impact on performance.
    topic: cerbos.audit.log # Required. Topic to write audit entries to.
  local:
    advanced:
      bufferSize: 256
      flushInterval: 1s
      gcInterval: 60s
      maxBatchSize: 32
    retentionPeriod: 168h # How long to keep records for
    storagePath: /path/to/dir # Path to store the data
Including or excluding request metadata in log entries

To tune how request metadata (headers) is logged in access and decision log entries, configure includeMetadataKeys and excludeMetadataKeys as follows (see the example after the list):

  • Both includeMetadataKeys and excludeMetadataKeys are empty: no metadata will be logged

  • Only includeMetadataKeys is defined: only the metadata keys in the list will be logged

  • Only excludeMetadataKeys is defined: everything except the keys defined in the list will be logged

  • Both includeMetadataKeys and excludeMetadataKeys are defined: only the keys in the include list will be logged if, and only if, they are not in the exclude list
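
For example, with the following configuration (header names are illustrative), only the content-type and x-request-id headers are logged; authorization appears in both lists, so the exclude list takes precedence and it is never logged:
audit:
  includeMetadataKeys: ['content-type', 'x-request-id', 'authorization']
  excludeMetadataKeys: ['authorization']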

If requests contain sensitive data such as authorization tokens, that data will be captured in the audit logs and will be visible to anyone with access to the log files. Cerbos automatically excludes the authorization header. However, if you use other header keys to store sensitive data, always exclude them using the excludeMetadataKeys configuration setting.

File backend

The file backend writes audit records as newline-delimited JSON to a file or stdout/stderr. With this backend you can use your existing log aggregation system (Datadog agent, Elastic agent, Fluentd, Graylog — to name a few) to collect, process and archive the audit data from all Cerbos instances.

This backend cannot be queried using the Admin API, cerbosctl audit or cerbosctl decisions.
Minimal configuration with file output and no log rotation
audit:
  enabled: true
  accessLogsEnabled: true
  decisionLogsEnabled: true
  backend: file
  file:
    path: /path/to/audit.log
Configuration with log rotation and output to both stdout and a file
audit:
  enabled: true
  accessLogsEnabled: true
  decisionLogsEnabled: true
  backend: file
  file:
    path: /path/to/file.log
    additionalPaths:
      - stdout
    logRotation:
      maxFileAgeDays: 10 # Maximum age in days of old log files before they are deleted.
      maxFileCount: 10 # Maximum number of old log files to retain.
      maxFileSizeMB: 100 # Maximum size of individual log files in megabytes.

The path field can be set to the special names stdout or stderr to log to stdout or stderr. Note that this results in audit logs being interleaved with normal Cerbos operational logs. It is recommended to use an actual file for audit log output if your container orchestrator supports collecting logs from files in addition to stdout/stderr.

Audit log entries can be selected by setting a filter on log.logger == "cerbos.audit". Access log entries have log.kind == "access" and decision log entries have log.kind == "decision".
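
As an illustration, an access log entry written by the file backend could look like this (trimmed; the exact fields depend on the request and configuration):
{"log.logger": "cerbos.audit", "log.kind": "access", "timestamp": "2024-05-01T10:00:00.000Z", "callId": "9bsv0s2kvs305km0qrtg", "peer": {"address": "192.168.1.10"}}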

If log rotation is enabled, maxFileSizeMB is the only required setting. If the maxFileCount and maxFileAgeDays settings are not defined, old log files are never deleted by the Cerbos process.
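
For example, a minimal sketch that enables size-based rotation only; rotated files then accumulate until something outside the Cerbos process removes them:
audit:
  enabled: true
  backend: file
  file:
    path: /path/to/file.log
    logRotation:
      maxFileSizeMB: 100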

Hub backend

Requires a Cerbos Hub account.

The hub backend securely sends audit logs to Cerbos Hub for aggregation and analysis. This vastly simplifies the work that would otherwise be required to configure and deploy a log aggregation solution to securely collect, store and query audit logs from across your fleet.

If you are new to Cerbos Hub, follow the getting started guide. For more information about configuring the PDP to send audit logs to Cerbos Hub, refer to the audit log collection documentation.

Kafka backend

The kafka backend writes audit records to a Kafka topic. By default, messages are published asynchronously to the specified topic in JSON format. The message header named cerbos.audit.kind has the value access for access log entries and decision for decision log entries.

You can configure the audit logger to produce data in the Protocol Buffers binary encoding format as well. The schema for messages is available at https://buf.build/cerbos/cerbos-api/docs/main:cerbos.audit.v1.

Minimal configuration
audit:
  enabled: true
  accessLogsEnabled: true
  decisionLogsEnabled: true
  backend: kafka
  kafka:
    brokers: ['broker1.kafka:9092', 'broker2.kafka:9092']
    topic: cerbos.audit.log
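
To use the Protocol Buffers encoding mentioned above, add the encoding field to the same minimal configuration:
audit:
  enabled: true
  accessLogsEnabled: true
  decisionLogsEnabled: true
  backend: kafka
  kafka:
    brokers: ['broker1.kafka:9092', 'broker2.kafka:9092']
    topic: cerbos.audit.log
    encoding: protobuf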
Full configuration
audit:
  enabled: true
  accessLogsEnabled: true
  decisionLogsEnabled: true
  backend: kafka
  kafka:
    ack: all # Ack mode for producing messages. Valid values are "none", "leader" or "all" (default). Idempotency is disabled when mode is not "all".
    authentication: # Authentication
      tls:
        caPath: /path/to/ca.crt # Required. CAPath is the path to the CA certificate.
        certPath: /path/to/tls.cert # CertPath is the path to the client certificate.
        insecureSkipVerify: true # InsecureSkipVerify controls whether the server's certificate chain and host name are verified. Default is false.
        keyPath: /path/to/tls.key # KeyPath is the path to the client key.
        reloadInterval: 5m # ReloadInterval is the interval at which the TLS certificates are reloaded. The default is 0 (no reload).
    brokers: ['localhost:9092'] # Required. Brokers list to seed the Kafka client.
    clientID: cerbos # ClientID reported in Kafka connections.
    closeTimeout: 30s # CloseTimeout sets how long to wait for any remaining messages to be flushed when closing the client.
    encoding: json # Encoding format. Valid values are "json" (default) or "protobuf".
    maxBufferedRecords: 1000 # MaxBufferedRecords sets the maximum number of records the client should buffer in memory in async mode.
    produceSync: false # ProduceSync forces the client to produce messages to Kafka synchronously. This can have a significant impact on performance.
    topic: cerbos.audit.log # Required. Topic to write audit entries to.
    compression: ['snappy'] # Compression sets the compression algorithm to use in order of priority. Valid values are "none", "gzip", "snappy", "lz4", "zstd". Default is ["snappy", "none"].

Local backend

The local backend uses an embedded key-value store to save audit records. Records are preserved for seven days by default and can be queried using the Admin API, the cerbosctl audit command or the cerbosctl decisions text user interface (TUI).
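
For example, recent decision records can be listed with cerbosctl (a sketch assuming the Admin API is enabled; connection and credential flags are omitted and the timestamps are illustrative):
cerbosctl audit --kind=decision --between=2024-05-01T00:00:00Z,2024-05-02T00:00:00Z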

The only required setting for the local backend is the storagePath field, which specifies the path on disk where the logs should be stored.

audit:
  enabled: true
  accessLogsEnabled: true
  decisionLogsEnabled: true
  backend: local
  local:
    storagePath: /path/to/dir
    retentionPeriod: 168h
    advanced:
      bufferSize: 16 # Size of the memory buffer. Increasing this will use more memory and increase the chances of losing data during a crash.
      maxBatchSize: 16 # Write batch size. If your records are small, increasing this will reduce disk IO.
      flushInterval: 30s # Time to keep records in memory before committing.
      gcInterval: 15m # How often the garbage collector runs to remove old entries from the log.