Rules and Alerts¶
Alerts are an important part of a SIEM system, as they let you define the anomalous scenarios you want to be alerted about. Alert configuration is available from the "Alerts" menu. We support three types of rules that generate alerts:
- Correlation rules - specify particular sequences of events
- Behavior rules - specify rules for anomalous behavior over a period of time
- Machine learning - use unsupervised machine learning to detect anomalous behavior
There are hundreds of predefined correlation rules for various systems and use-cases. They can be searched and imported through the Correlation rules screen.
Each rule can be enabled or disabled. Only enabled rules are executed.
Alert destinations are LogSentinel SIEM's abstraction for "what happens when an alert triggers". They include sending notifications via multiple channels and executing automated response commands and playbooks. For more details, see the alert destinations page.
Behavior rules¶
These rules create a baseline for a given data source (or a set of data sources) and monitor for deviations from that baseline.
They are defined via a wizard and can be expressed, for example, as: trigger the alert if the number of actions in the last 5 minutes is more than 2 standard deviations above what was observed over the last 2 hours; apply this only within working hours.
Supported comparison methods include standard deviation, mean value, and a fixed constant. Alerts can be triggered if the observed value is above, below, or either above or below the expected normal value.
By default the number of log entries is taken, but for numeric parameters a sum or average can also be calculated.
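As an illustration, the standard-deviation comparison described above can be sketched in a few lines of Python. The function name, thresholds, and sample data are invented for the example; the actual implementation is internal to the wizard:

```python
from statistics import mean, stdev

def deviates(baseline_counts, current_count, n_std=2.0, direction="above"):
    """Check whether the current window deviates from the baseline.

    baseline_counts: per-window event counts from the baseline period
    current_count:   event count in the window being evaluated
    direction:       "above", "below", or "both" (either side)
    """
    mu, sigma = mean(baseline_counts), stdev(baseline_counts)
    upper, lower = mu + n_std * sigma, mu - n_std * sigma
    if direction == "above":
        return current_count > upper
    if direction == "below":
        return current_count < lower
    return current_count > upper or current_count < lower  # "both"

# Baseline of ~100 events per window (mean 100, stdev 2); a spike to 160 triggers.
history = [98, 102, 100, 97, 103, 101, 99, 100]
assert deviates(history, 160, n_std=2)
assert not deviates(history, 101, n_std=2)
```

The `direction` parameter corresponds to the above/below/both choice mentioned earlier.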
Numeric parameters are all parameters that can be parsed to a number; no additional configuration is needed. If using the API, they are specified as GET parameters to log queries. These can include bytes processed, request/response times, etc.
Behavior rules can use custom filters on the selected data source, i.e. run the rule only on the subset of data that matches a specific query.
Typical examples include:
- Anomalous log count - send an alert if there's a difference of more than 3 standard deviations from the observed baseline. The baseline is computed over the "search period" in the wizard, whereas the period to compare is called the "aggregation period"
- Anomalous log count by a particular actor - same as above, but select actorId as the "Group by field". That way entries are grouped by actorId in order to compare per-actor activity
- Missing logs - look for at least 1 log entry over a configured period of time. If there are fewer (using the FIXED comparison method), trigger an alert
- Anomalous traffic volume - send an alert if the sum of the bytesProcessed parameter for web server access logs suddenly becomes too high
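The "group by actor" variant above can be illustrated with a small sketch. The event shape and the fixed threshold here are invented for the example; the real rule applies the statistical comparison configured in the wizard to each group:

```python
from collections import Counter

def anomalous_actors(events, threshold):
    """Group events by actorId and flag actors whose activity exceeds a threshold.

    events: list of dicts with an 'actorId' key (one dict per log entry)
    """
    counts = Counter(e["actorId"] for e in events)
    return {actor: n for actor, n in counts.items() if n > threshold}

events = [{"actorId": "alice"}] * 3 + [{"actorId": "bob"}] * 40
assert anomalous_actors(events, threshold=10) == {"bob": 40}
```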
Correlation rules¶
These rules allow specifying a chain of events that triggers the alert. The rule is executed every X minutes (configurable).
A rule consists of a chain of criteria. If a criterion is met, the next one in the chain is consulted. If all criteria are met, the alert triggers. Each entry has the following properties:
- Actor/action/entity/param OR raw query - the combination of criteria (query) to match a particular event
- Count - "less than" or "more than" the specified amount of events. If at least one event should match the criteria, use "more than 0"
- Time frame - the timeframe in which the specified number of events (matching the specified action) should have occurred
An example correlation alert can be: "If you encounter a DELETE action outside working hours, trigger an alert". Or "After an UPDATE action, trigger an alert if there are more than 10 other update actions on the same entity within the next 2 minutes by the same actor".
A checkbox "Match actors" can be checked if all matched events should be performed by the same actor (typically user, host or IP). If not checked, subsequent events by multiple actors can trigger an alert.
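A minimal sketch of how such a chain of criteria could be evaluated follows. Queries are modeled as plain predicates and timestamps as integers; the actual engine matches on actor/action/entity/param fields or raw queries:

```python
def chain_matches(events, criteria):
    """Evaluate a correlation chain: each criterion must be satisfied in order.

    events:   list of dicts, e.g. {"action": "UPDATE", "actor": "alice", "ts": 3}
    criteria: list of dicts with keys:
        query      - predicate selecting matching events
        min_count  - "more than" this many events must match
        time_frame - (start, end) window in which they must occur
    """
    for c in criteria:
        start, end = c["time_frame"]
        matched = [e for e in events
                   if c["query"](e) and start <= e["ts"] <= end]
        if len(matched) <= c["min_count"]:   # "more than" semantics
            return False  # chain broken: stop consulting further criteria
    return True

# "After an UPDATE, alert if more than 10 further UPDATEs follow shortly."
events = [{"action": "UPDATE", "actor": "alice", "ts": t} for t in range(12)]
rule = [
    {"query": lambda e: e["action"] == "UPDATE", "min_count": 0, "time_frame": (0, 0)},
    {"query": lambda e: e["action"] == "UPDATE", "min_count": 10, "time_frame": (1, 11)},
]
assert chain_matches(events, rule)
```

The "Match actors" checkbox would additionally require all matched events to share the same actor value.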
Sums can be used for numeric parameters as well. A sum action is executed at the end of the chain to additionally confirm whether the alert should fire. A sum action is a specific type of event entry that contains the numeric parameter in a sum path (JSON Path or XPath). For every entry in the time frame, the sum of the numeric values of all previous entries of the sum type is calculated and compared with the current entry; if the current entry's value is larger than the configured percentage of that sum, the alert is triggered.
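Under that interpretation, the sum-action check could be sketched as follows (a simplified model: `values` holds the numeric parameter extracted from each entry in time order, and the percentage is expressed as a fraction):

```python
def sum_action_triggers(values, percentage):
    """For each entry, compare its value against a percentage of the sum of
    all previous entries; trigger if the current value exceeds that share."""
    running_sum = 0
    for v in values:
        if running_sum and v > percentage * running_sum:
            return True
        running_sum += v
    return False

# Steady values never exceed 100% of everything seen before...
assert not sum_action_triggers([10, 10, 10, 10, 10], 1.0)
# ...but a sudden large value does (45 > 100% of 40).
assert sum_action_triggers([10, 10, 10, 10, 45], 1.0)
```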
Correlation rules can be defined across multiple data sources with different types of events. For example, an alert from an IDS can be correlated with ActiveDirectory authentication, or a detected vulnerability can be correlated with increased web server activity, and so on. Each event (log entry, 3rd party alert, vulnerability scan, SNMP trap, network flow, honeypot reports) can be part of the correlation.
A correlation rule can have preconditions - i.e. other rules that must have triggered in order to execute the current rule. This allows combining rules and using base rules as building blocks.
Healthcheck rules¶
For each data source, LogSentinel SIEM automatically creates a healthcheck rule that monitors sources for missing logs. In case a source stops sending logs, an alert is triggered, as missing logs might be an indicator of a compromise. Healthcheck rules can then be tweaked in terms of the required period of inactivity in order to account for sources that produce logs only occasionally.
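The essence of a healthcheck rule is a simple inactivity check, sketched here (the function and parameter names are illustrative):

```python
from datetime import datetime, timedelta

def source_is_silent(last_entry_at, now, max_inactivity):
    """Healthcheck: a source is considered silent (possible compromise or
    misconfiguration) if no log entry arrived within the allowed window."""
    return now - last_entry_at > max_inactivity

now = datetime(2024, 1, 1, 12, 0)
assert source_is_silent(datetime(2024, 1, 1, 9, 0), now, timedelta(hours=2))
assert not source_is_silent(datetime(2024, 1, 1, 11, 30), now, timedelta(hours=2))
```

Tweaking the rule for an occasional source amounts to raising `max_inactivity`.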
Rule templates¶
Over 700 built-in rules exist for various systems. They can be searched and used as templates for creating active rules.
When creating a rule from a template, a set of data sources needs to be specified. After the rule is created, destinations should be specified as well.
Built-in templates are updated automatically for SaaS setups and downloaded automatically for on-premise setups.
Machine learning anomaly detection¶
Machine learning anomaly detection is configured per organization and runs in the background. If anomalous behavior is detected, users are notified.
In order to enable machine learning anomaly detection, go to the "Alerts/Anomaly detection" page and select the data sources for which it should be enabled. There are a few parameters to configure:
- Enable anomaly detection - by default ML anomaly detection is disabled for all data sources. Once enabled, it needs a period of at least 10 days of steady flow of data to get the model trained. Make sure you enable it after data ingestion has started
- Use original timestamp - this indicates which timestamp should be used by the algorithm. By default it uses the timestamp when the event was received by SentinelTrails; however, in some cases an originalEventTimestamp can be specified (depending on the sending/collection logic, that may be more accurate)
- Use entry fields entityId and entityType for anomaly detection - some applications are able to send an entityId for each log entry. This is an important feature for the machine learning model, if available. A typical example is when entries are related to database records, where entityType is the table name and entityId is the primary key of the table.
The algorithm used is Isolation Forest, which is well suited for datasets that are expected to contain very few anomalies.
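To illustrate why Isolation Forest fits this setting, here is a toy example using scikit-learn. The features and data are invented for the example; LogSentinel SIEM's actual model and feature set are not exposed:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# One row per time window; hypothetical features: event count, avg response time.
normal = rng.normal(loc=[100.0, 0.2], scale=[5.0, 0.05], size=(500, 2))

# Train on mostly-normal traffic; contamination = expected anomaly fraction.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Extreme windows (a burst of events, a near-silent slow window) are flagged.
outliers = np.array([[400.0, 2.0], [5.0, 3.0]])
labels = model.predict(outliers)  # 1 = normal, -1 = anomaly
```

Because the model isolates rare points quickly, it needs no labeled incidents, which matches the unsupervised, per-organization setup described above.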
Triaging alerts¶
The Alerts page contains all recently triggered alerts. They can be filtered by rule name, affected data sources, tags, or the type of the alert (e.g. whether it's a behavior rule violation, a correlation rule violation, threat intelligence match, leaked credentials or other). Healthcheck alerts (i.e. "missing logs from source") can be filtered out as well.
Each alert can be triaged. The triage allows for reviewing all events associated with the alert. After a triage is started, the alert is assigned to the user that picked it up.
After triage, the alert can be confirmed or rejected. In case there are automated actions to be executed after triage, they are run.
Clicking on the first column opens the dashboard, showing only the events that are part of the triggered alert. They can be filtered further as well as reported on via the reports tab.
Running on historical data¶
After being defined, alerts trigger on real-time data by default. They can be manually executed over a selected period of time via the "Historical correlation" submenu in the "Threat hunting" menu.
Working hours¶
An important configuration for alert rules is the organization's business hours. They can be configured from the dedicated "Alerts/Working Hours" menu.
Working hours can be set globally for the organization, or individually per data source. This is needed as some applications may be used by employees working in multiple shifts or with clients in different timezones and have shifted working hours.
Public holidays can be manually added. That's optional, but some alerts may otherwise be triggered on holidays due to the reduced activity.
Rules can be defined to work either inside or outside working hours, as behavior differs significantly. Rules can be configured to work throughout the whole day, regardless of business hours as well.
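A rule's working-hours constraint boils down to a check like the following sketch (the default hours and the mode names are assumptions made for the example):

```python
from datetime import datetime, time

def within_working_hours(ts, start=time(9, 0), end=time(18, 0),
                         workdays=range(0, 5)):
    """True if ts falls inside business hours (Mon-Fri 09:00-18:00 here;
    weekday() is 0 for Monday)."""
    return ts.weekday() in workdays and start <= ts.time() < end

def rule_applies(ts, mode):
    """mode: 'inside', 'outside', or 'always' (whole day, regardless)."""
    if mode == "always":
        return True
    inside = within_working_hours(ts)
    return inside if mode == "inside" else not inside

assert rule_applies(datetime(2024, 1, 1, 10, 0), "inside")   # Monday 10:00
assert rule_applies(datetime(2024, 1, 6, 10, 0), "outside")  # Saturday
```

Per-data-source schedules would simply pass different `start`/`end`/`workdays` values for each source.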
Risk level¶
Each alert is assigned a risk level. The risk level combines severity and priority and is based on multiple factors, including the risk level of the data source(s) involved, the risk level of the triggered rule, the number of impacted users or hosts and the number of impacted high-profile users or hosts. Most of these parameters (risk levels) are configurable in the edit dialogs, allowing for customizing the base formula.
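As a rough illustration of such a combined score (all weights, thresholds, and names here are invented; the actual formula and its configurable parameters live in the edit dialogs):

```python
def risk_level(source_risk, rule_risk, impacted, impacted_high_profile,
               w_high_profile=3):
    """Illustrative scoring: combine the (configurable) risk of the data
    source and rule with the number of impacted users/hosts; high-profile
    targets are weighted more heavily."""
    score = (source_risk + rule_risk + impacted
             + w_high_profile * impacted_high_profile)
    if score >= 20:
        return "CRITICAL"
    if score >= 10:
        return "HIGH"
    return "MEDIUM"

assert risk_level(3, 4, 2, 0) == "MEDIUM"    # score 9
assert risk_level(3, 4, 2, 4) == "CRITICAL"  # score 9 + 12 = 21
```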
Rules can be managed through API calls in addition to the UI wizard. The API reference is available here.
Rules can be exported and imported one by one or in bulk through the menu above the list. The export is a ZIP file, containing a JSON representation of each rule.
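The export format described above (a ZIP file containing one JSON document per rule) can be mimicked with a few lines of Python; the file naming inside the archive is an assumption for the example:

```python
import io
import json
import zipfile

def export_rules(rules):
    """Bundle each rule as a JSON file inside an in-memory ZIP archive."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for rule in rules:
            zf.writestr(f"{rule['id']}.json", json.dumps(rule))
    return buf.getvalue()

def import_rules(data):
    """Read every JSON file back out of the archive."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        return [json.loads(zf.read(name)) for name in zf.namelist()]

rules = [{"id": "r1", "type": "CORRELATION"}, {"id": "r2", "type": "BEHAVIOR"}]
assert import_rules(export_rules(rules)) == rules
```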