Audit log
Tracking user activities, system events, and data changes with Red Hat Developer Hub audit logs
Chapter 1. Audit logs in Red Hat Developer Hub
Audit logs are a chronological set of records documenting the user activities, system events, and data changes that affect your Red Hat Developer Hub users, administrators, or components. Administrators can view Developer Hub audit logs in the OpenShift Container Platform web console to monitor scaffolder events, changes to the RBAC system, and changes to the Catalog database. Audit logs include the following information:
- Name of the audited event
- Actor that triggered the audited event, for example, terminal, port, IP address, or hostname
- Event metadata, for example, date, time
- Event status, for example, `success`, `failure`
- Severity levels, for example, `info`, `debug`, `warn`, `error`
You can use the information in the audit log to achieve the following goals:
- Enhance security
- Trace activities, including those initiated by automated systems and software templates, back to their source. Know when software templates are executed, as well as the details of application and component installations, updates, configuration changes, and removals.
- Automate compliance
- Use streamlined processes to view log data for specified points in time for auditing purposes or continuous compliance maintenance.
- Debug issues
- Use access records and activity details to fix issues with software templates or plugins.
Audit logs are not forwarded to the internal log store by default because this does not provide secure storage. You are responsible for ensuring that the system to which you forward audit logs is compliant with your organizational and governmental regulations, and is properly secured.
Additional resources
- For more information about logging in OpenShift Container Platform, see About Logging
- For a complete list of fields that a Developer Hub audit log can include, see Section 3.1, “Audit log fields”
- For a list of scaffolder events that a Developer Hub audit log can include, see Section 3.2, “Scaffolder events”
Chapter 2. Configuring audit logs for Developer Hub on OpenShift Container Platform
Use the OpenShift Container Platform web console to configure the following OpenShift Container Platform logging components to use audit logging for Developer Hub:
- Logging deployment
- Configure the logging environment, including both the CPU and memory limits for each logging component. For more information, see Red Hat OpenShift Container Platform - Configuring your Logging deployment.
- Logging collector
- Configure the `spec.collection` stanza in the `ClusterLogging` custom resource (CR) to use a supported modification to the log collector and collect logs from `STDOUT` (a minimal sketch follows this list). For more information, see Red Hat OpenShift Container Platform - Configuring the logging collector.
- Log forwarding
- Send logs to specific endpoints inside and outside your OpenShift Container Platform cluster by specifying a combination of outputs and pipelines in a `ClusterLogForwarder` CR. For more information, see Red Hat OpenShift Container Platform - Enabling JSON log forwarding and Red Hat OpenShift Container Platform - Configuring log forwarding.
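For reference, the `spec.collection` stanza mentioned above belongs to a `ClusterLogging` custom resource. The following is a minimal sketch only; the exact fields that are available depend on your OpenShift Logging version:
Example `ClusterLogging` collector configuration (sketch)
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    # Selects the supported log collector; resource values are illustrative
    type: vector
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
      limits:
        memory: 512Mi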
2.1. Forwarding Red Hat Developer Hub audit logs to Splunk
You can use the Red Hat OpenShift Logging (OpenShift Logging) Operator and a `ClusterLogForwarder` instance to capture the streamed audit logs from a Developer Hub instance and forward them to the HTTPS endpoint associated with your Splunk instance.
Prerequisites
- You have a cluster running on a supported OpenShift Container Platform version.
- You have an account with `cluster-admin` privileges.
- You have a Splunk Cloud account or Splunk Enterprise installation.
Procedure
- Log in to your OpenShift Container Platform cluster.
- Install the OpenShift Logging Operator in the `openshift-logging` namespace and switch to the namespace:
Example command to switch to a namespace
oc project openshift-logging
- Create a `serviceAccount` named `log-collector` and bind the `collect-application-logs` role to the `serviceAccount`:
Example command to create a `serviceAccount`
oc create sa log-collector
Example command to bind a role to a `serviceAccount`
oc create clusterrolebinding log-collector --clusterrole=collect-application-logs --serviceaccount=openshift-logging:log-collector
- Generate a `hecToken` in your Splunk instance.
- Create a key/value secret in the `openshift-logging` namespace and verify the secret:
Example command to create a key/value secret with `hecToken`
oc -n openshift-logging create secret generic splunk-secret --from-literal=hecToken=<HEC_Token>
Example command to verify a secret
oc -n openshift-logging get secret/splunk-secret -o yaml
- Create a basic `ClusterLogForwarder` resource YAML file as follows:
Example `ClusterLogForwarder` resource YAML file
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
For more information, see Creating a log forwarder.
- Define the following `ClusterLogForwarder` configuration using the OpenShift web console or OpenShift CLI:
- Specify `log-collector` as the `serviceAccount` in the YAML file:
Example `serviceAccount` configuration
serviceAccount:
  name: log-collector
- Configure `inputs` to specify the type and source of logs to forward. The following configuration enables the forwarder to capture logs from all applications in a provided namespace:
Example `inputs` configuration
inputs:
  - name: my-app-logs-input
    type: application
    application:
      includes:
        - namespace: my-developer-hub-namespace
      containerLimit:
        maxRecordsPerSecond: 100
For more information, see Forwarding application logs from specific pods.
- Configure `outputs` to specify where the captured logs are sent. In this step, focus on the `splunk` type. You can either use the `tls.insecureSkipVerify` option if the Splunk endpoint uses self-signed TLS certificates (not recommended) or provide the certificate chain using a Secret.
Example `outputs` configuration
outputs:
  - name: splunk-receiver-application
    type: splunk
    splunk:
      authentication:
        token:
          key: hecToken
          secretName: splunk-secret
      index: main
      url: 'https://my-splunk-instance-url'
    rateLimit:
      maxRecordsPerSecond: 250
For more information, see Forwarding logs to Splunk in OpenShift Container Platform documentation.
- Optional: Filter logs to include only audit logs:
Example `filters` configuration
filters:
  - name: audit-logs-only
    type: drop
    drop:
      - test:
          - field: .message
            notMatches: isAuditLog
For more information, see Filtering logs by content in OpenShift Container Platform documentation.
- Configure `pipelines` to route logs from specific inputs to designated outputs. Use the names of the defined inputs and outputs to specify multiple `inputRefs` and `outputRefs` in each pipeline:
Example `pipelines` configuration
pipelines:
  - name: my-app-logs-pipeline
    detectMultilineErrors: true
    inputRefs:
      - my-app-logs-input
    outputRefs:
      - splunk-receiver-application
    filterRefs:
      - audit-logs-only
- Run the following command to apply the `ClusterLogForwarder` configuration:
Example command to apply the `ClusterLogForwarder` configuration
oc apply -f <ClusterLogForwarder-configuration.yaml>
- Optional: To reduce the risk of log loss, configure your `ClusterLogForwarder` pods using the following options:
- Define the resource requests and limits for the log collector as follows:
Example `collector` configuration
collector:
  resources:
    requests:
      cpu: 250m
      memory: 64Mi
      ephemeral-storage: 250Mi
    limits:
      cpu: 500m
      memory: 128Mi
      ephemeral-storage: 500Mi
- Define `tuning` options for log delivery, including `delivery`, `compression`, and `RetryDuration`. Tuning can be applied per output as needed.
Example `tuning` configuration
tuning:
  delivery: AtLeastOnce 1
  compression: none
  minRetryDuration: 1s
  maxRetryDuration: 10s
- 1
- `AtLeastOnce` delivery mode means that if the log forwarder crashes or is restarted, any logs that were read before the crash but not sent to their destination are re-sent. It is possible that some logs are duplicated after a crash.
Verification
- Confirm that logs are being forwarded to your Splunk instance by viewing them in the Splunk dashboard.
- Troubleshoot any issues using OpenShift Container Platform and Splunk logs as needed.
Chapter 3. Viewing audit logs in Developer Hub
Administrators can view, search, filter, and manage the log data from the Red Hat OpenShift Container Platform web console. You can filter audit logs from other log types by using the `isAuditLog` field.
Prerequisites
- You are logged in as an administrator in the OpenShift Container Platform web console.
Procedure
- From the Developer perspective of the OpenShift Container Platform web console, click the Topology tab.
- From the Topology view, click the pod that you want to view audit log data for.
- From the pod panel, click the Resources tab.
- From the Pods section of the Resources tab, click View logs.
- From the Logs view, enter `isAuditLog` into the Search field to filter audit logs from other log types. You can use the arrows to browse the logs containing the `isAuditLog` field.
3.1. Audit log fields
Developer Hub audit logs can include the following fields:
eventName
- The name of the audited event.
actor
- An object containing information about the actor that triggered the audited event. Contains the following fields:
actorId
- The name/id/`entityRef` of the associated user or service. Can be `null` if an unauthenticated user accesses the endpoints and the default authentication policy is disabled.
ip
- The IP address of the actor (optional).
hostname
- The hostname of the actor (optional).
client
- The user agent of the actor (optional).
stage
- The stage of the event at the time that the audit log was generated, for example, `initiation` or `completion`.
status
- The status of the event, for example, `succeeded` or `failed`.
meta
- An optional object containing event-specific data, for example, `taskId`.
request
- An optional field that contains information about the HTTP request sent to an endpoint. Contains the following fields:
method
- The HTTP method of the request.
query
- The `query` fields of the request.
params
- The `params` fields of the request.
body
- The request `body`. The `secrets` provided when creating a task are redacted and appear as `*`.
url
- The endpoint URL of the request.
response
- An optional field that contains information about the HTTP response sent from an endpoint. Contains the following fields:
status
- The status code of the HTTP response.
body
- The contents of the response body.
isAuditLog
- A flag set to `true` to differentiate audit logs from other log types.
errors
- A list of errors containing the `name`, `message`, and potentially the `stack` field of the error. Only appears when `status` is `failed`.
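For illustration only, a single audit log entry that combines these fields might look like the following; the event, actor, and endpoint values are hypothetical, and the exact fields present vary by event:
{
  "eventName": "ScaffolderTaskCreation",
  "actor": {
    "actorId": "user:default/jdoe",
    "ip": "192.0.2.10",
    "hostname": "example-host",
    "client": "Mozilla/5.0"
  },
  "stage": "completion",
  "status": "succeeded",
  "meta": {
    "taskId": "example-task-id"
  },
  "request": {
    "method": "POST",
    "url": "/v2/tasks",
    "body": {
      "secrets": "*"
    }
  },
  "response": {
    "status": 201
  },
  "isAuditLog": true
}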
3.2. Scaffolder events
Developer Hub audit logs can include the following scaffolder events:
ScaffolderParameterSchemaFetch
- Tracks `GET` requests to the `/v2/templates/:namespace/:kind/:name/parameter-schema` endpoint, which returns template parameter schemas.
ScaffolderInstalledActionsFetch
- Tracks `GET` requests to the `/v2/actions` endpoint, which returns the list of installed actions.
ScaffolderTaskCreation
- Tracks `POST` requests to the `/v2/tasks` endpoint, which creates tasks that the scaffolder executes.
ScaffolderTaskListFetch
- Tracks `GET` requests to the `/v2/tasks` endpoint, which fetches details of all tasks in the scaffolder.
ScaffolderTaskFetch
- Tracks `GET` requests to the `/v2/tasks/:taskId` endpoint, which fetches details of the specified `:taskId` task.
ScaffolderTaskCancellation
- Tracks `POST` requests to the `/v2/tasks/:taskId/cancel` endpoint, which cancels a running task.
ScaffolderTaskStream
- Tracks `GET` requests to the `/v2/tasks/:taskId/eventstream` endpoint, which returns an event stream of the task logs of the `:taskId` task.
ScaffolderTaskEventFetch
- Tracks `GET` requests to the `/v2/tasks/:taskId/events` endpoint, which returns a snapshot of the task logs of the `:taskId` task.
ScaffolderTaskDryRun
- Tracks `POST` requests to the `/v2/dry-run` endpoint, which creates a dry-run task. All audit logs for events associated with dry runs have the `meta.isDryLog` flag set to `true`.
ScaffolderStaleTaskCancellation
- Tracks automated cancellation of stale tasks.
ScaffolderTaskExecution
- Tracks the `initiation` and `completion` of a real scaffolder task execution (does not occur during dry runs).
ScaffolderTaskStepExecution
- Tracks the `initiation` and `completion` of a scaffolder task step execution.
ScaffolderTaskStepSkip
- Tracks steps skipped because their `if` conditionals are not met.
ScaffolderTaskStepIteration
- Tracks the step execution of each iteration of a task step that contains the `each` field.
3.3. Catalog events
Developer Hub audit logs can include the following catalog events:
CatalogEntityAncestryFetch
- Tracks `GET` requests to the `/entities/by-name/:kind/:namespace/:name/ancestry` endpoint, which returns the ancestry of an entity.
CatalogEntityBatchFetch
- Tracks `POST` requests to the `/entities/by-refs` endpoint, which returns a batch of entities.
CatalogEntityDeletion
- Tracks `DELETE` requests to the `/entities/by-uid/:uid` endpoint, which deletes an entity. If the parent location of the deleted entity is still present in the catalog, then the entity is restored in the catalog during the next processing cycle.
CatalogEntityFacetFetch
- Tracks `GET` requests to the `/entity-facets` endpoint, which returns the facets of an entity.
CatalogEntityFetch
- Tracks `GET` requests to the `/entities` endpoint, which returns a list of entities.
CatalogEntityFetchByName
- Tracks `GET` requests to the `/entities/by-name/:kind/:namespace/:name` endpoint, which returns an entity matching the specified entity reference, for example, `<kind>:<namespace>/<name>`.
CatalogEntityFetchByUid
- Tracks `GET` requests to the `/entities/by-uid/:uid` endpoint, which returns an entity matching the unique ID of the specified entity.
CatalogEntityRefresh
- Tracks `POST` requests to the `/entities/refresh` endpoint, which schedules the specified entity to be refreshed.
CatalogEntityValidate
- Tracks `POST` requests to the `/entities/validate` endpoint, which validates the specified entity.
CatalogLocationCreation
- Tracks `POST` requests to the `/locations` endpoint, which creates a location. A location is a marker that references other places to look for catalog data.
CatalogLocationAnalyze
- Tracks `POST` requests to the `/locations/analyze` endpoint, which analyzes the specified location.
CatalogLocationDeletion
- Tracks `DELETE` requests to the `/locations/:id` endpoint, which deletes a location and all child entities associated with it.
CatalogLocationFetch
- Tracks `GET` requests to the `/locations` endpoint, which returns a list of locations.
CatalogLocationFetchByEntityRef
- Tracks `GET` requests to the `/locations/by-entity` endpoint, which returns a list of locations associated with the specified entity reference.
CatalogLocationFetchById
- Tracks `GET` requests to the `/locations/:id` endpoint, which returns a location matching the specified location ID.
QueriedCatalogEntityFetch
- Tracks `GET` requests to the `/entities/by-query` endpoint, which returns a list of entities matching the specified query.
Chapter 4. Audit log file rotation in Red Hat Developer Hub
Logging to a rotating file in Red Hat Developer Hub is helpful for persistent storage of audit logs.
Persistent storage ensures that the file remains intact even after a pod is restarted. Audit log file rotation creates a new file at regular intervals, with only new data being written to the latest file.
- Default settings
Audit logging to a rotating file is disabled by default. When it is enabled, the default behavior changes to:
- Rotate logs at midnight (local system timezone).
- Log file format: `redhat-developer-hub-audit-%DATE%.log`.
- Log files are stored in `/var/log/redhat-developer-hub/audit`.
- No automatic log file deletion.
- No gzip compression of archived logs.
- No file size limit.
Audit logs are written in the `/var/log/redhat-developer-hub/audit` directory.
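For example, to enable rotation while keeping all of the defaults listed above, it is enough to set `enabled` in the `auditLog.rotateFile` section of your Developer Hub configuration (a minimal sketch; the full set of options is shown later in this chapter):
auditLog:
  rotateFile:
    # Turns on audit log file rotation with the default behavior
    enabled: true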
- Log file names
- Audit log file names are in the following format: `redhat-developer-hub-audit-%DATE%.log`, where `%DATE%` is the format specified in `auditLog.rotateFile.dateFormat`. You can customize file names when you configure file rotation.
- File rotation date and frequency
Supported `auditLog.rotateFile.frequency` options include:
- `daily`: Rotate daily at 00:00 local time
- `Xm`: Rotate every X minutes (where X is a number between 0 and 59)
- `Xh`: Rotate every X hours (where X is a number between 0 and 23)
- `test`: Rotate every 1 minute
- `custom`: Use `dateFormat` to set the rotation frequency (default if `frequency` is not specified)
If `frequency` is set to `Xh`, `Xm`, or `test`, the `dateFormat` setting must be configured in a format that includes the specified time component. Otherwise, the rotation might not work as expected. For example, use `dateFormat: 'YYYY-MM-DD-HH'` for hourly rotation, and `dateFormat: 'YYYY-MM-DD-HH-mm'` for minute rotation.
Example minute rotation:
auditLog:
  rotateFile:
    # If you want to rotate the file every 17 minutes
    dateFormat: 'YYYY-MM-DD-HH-mm'
    frequency: '17m'
The `dateFormat` setting configures both the `%DATE%` in `logFileName` and the file rotation frequency if `frequency` is set to `custom`. The default format is `YYYY-MM-DD`, meaning daily rotation. Supported values are based on Moment.js formats.
If the `frequency` is set to `custom`, then rotations take place when the date string, which is represented in the specified `dateFormat`, changes.
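For example, the following sketch uses a custom date string to get a weekly rotation; the `'YYYY-[W]WW'` Moment.js format is an illustrative assumption, and any format whose rendered value changes at the desired interval behaves the same way:
auditLog:
  rotateFile:
    # Rotate whenever the rendered date string changes, in this case once per ISO week
    dateFormat: 'YYYY-[W]WW'
    frequency: 'custom'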
- Archive and delete
- By default, log files are not archived or deleted.
- Enable and configure audit file rotation
- If you are an administrator of Developer Hub, you can enable file rotation for audit logs, and configure the log file location, name format, frequency, log file size, retention policy, and archiving.
Example audit log file rotation configuration
auditLog:
  rotateFile:
    enabled: true 1
    logFileDirPath: /custom-path 2
    logFileName: custom-audit-log-%DATE%.log 3
    frequency: '12h' 4
    dateFormat: 'YYYY-MM-DD' 5
    utc: false 6
    maxSize: 100m 7
    maxFilesOrDays: 14 8
    zippedArchive: true 9
- 1
- Set `enabled` to `true` to use audit log file rotation. By default, it is set to `false`.
- 2
- Absolute path to the log file. The specified directory is created automatically if it does not exist.
- 3
- Default log file name format.
- 4
- If no frequency is specified, then the default file rotation occurs daily at 00:00 local time.
- 5
- Default date format.
- 6
- Set `utc` to `true` to use UTC time for `dateFormat` instead of local time.
- 7
- Sets a maximum file size limit for the audit log. In this example, the maximum size is 100m.
- 8
- If set to a number of files, for example `14`, then it deletes the oldest log when there are more than 14 log files. If set to a number of days, for example `5d`, then it deletes logs older than 5 days.
- 9
- Archive and compress rotated logs using `gzip`. The default value is `false`.
- By default, log files are not archived or deleted. If log deletion is enabled, then a `.<sha256 hash>-audit.json` file is generated in the log directory to track the generated logs. Any log file not contained in the directory is not subject to automatic deletion.
- A new `.<sha256 hash>-audit.json` file is generated each time the backend starts, which causes previous audit logs to stop being tracked or deleted, except for those still in use by the current backend.