LogSentinel Collector Configuration¶
Configuration via UI¶
The LogSentinel Collector exposes a web-based UI on port 8070 that allows you to configure multiple sources quickly. The UI is synchronized with the YAML configuration so you don't have to choose one over the other.
Configuration file¶
Below is a full reference of the configuration options for the LogSentinel Collector.
If you don't specify a data source ID, a new data source is created and the ID is automatically set.
Supported connectors¶
The following collector types are supported (all of them, except syslog, netflow, honeypot, networkMonitoring and ossec, support a list of entries):
- file - watches one or more files and sends each new line as a separate event
- database - watches one or more tables using custom queries and sends events based on a comparison column (usually a timestamp or sequential ID)
- mssqlAuditLog - watches MS SQL Server audit log (needs to be properly configured prior to starting the collector)
- mssqlChangeTracking - watches MS SQL Server change tracking details and sends each change
- mssqlEventLog - watches the MS SQL Server Windows logs
- mssqlLogin - watches for MS SQL Server login events
- windowsEventLog - watches Windows event log and sends each entry
- exchangeAdminLog - watches the admin log of Microsoft Exchange
- syslog - used to activate a syslog server that forwards syslog events to LogSentinel
- sapReadAccessLog - reading SAP's RAL (Read access log)
- netflow - used to activate a netflow v9 collector (partially compatible with IPFIX)
- snmp - used to receive SNMP traps
- oracle - configures auditing and FGA on Oracle DB and watches the DBA_AUDIT_TRAIL and FGA_LOGS tables (extending the database collector)
- postgre - watches a PostgreSQL database using pgaudit
- honeypot - acts as a decoy server on pre-defined services, protocols and ports, collecting credentials from malicious attempts
- networkMonitoring - listens to all packets on a network interface, transforms them to flows and sends them to the SIEM
- vsphere - watches VCenter events and system event logs
- ossec - receives messages from the OSSEC endpoint agent
- sshUser - watches remote server user activity using the "w" command
- directory - watches a given directory for changes (files created, deleted or modified)
- discovery - scans a given network/CIDR for active IPs and every IP for running services (ssh, mysql, etc.)
- vaultAuditLog - listens for HashiCorp Vault audit log messages
- leakedCredentials - watches mail server for email addresses to send for leaked credentials monitoring
- connectivity - periodically checks if a given host is reachable
- axonDb - watches AxonDB modifications and turns them into audit trail entries
General configuration¶
# Data source ID (ApplicationId), OrganizationId and secret obtained from the API credentials page in the dashboard.
# The dataSourceId can be overridden per targetType
dataSourceId: ba2f0680-5424-11e8-b88d-6f2c1b6625e8
organizationId: ba2cbc90-5424-11e8-b88d-6f2c1b6625e8
secret: d8b63c3d82a6deb56b005a3b8617bf376b6aa6c181021abd0d37e5c5ac9911a1
# The type of event being sent by default
entryType: AUDIT_LOG
# The base URL to connect to. Change only for on-premise deployments
serverBaseUrl: https://api.logsentinel.com
# Keystore configurations. Use only if you need each request to be digitally signed.
keystorePath: /path/to/keystore.jks
keystorePassword: password
keystoreAlias: alias
# Configure whether the MAC address of the machine is sent as a parameter attached to each event
includeMacAddress: false
# Configure whether the local IP address of the machine is sent as a parameter attached to each event
includeLocalIp: false
# Configure whether collectors that rely on timestamp to send events should start from events that
# happen after the collector is installed, or historical events should be consumed and sent as well
# (not applicable if historical data is not available)
timestampInitialUseCurrent: true
# Allows trusting self-signed certificates provided by the LogSentinel service.
# Use only for on-premise installations
trustSelfSignedCertificates: false
# Set this property if collector failover is needed. A stand-by collector is polling the main collector and is
# ready to take over collection in case the main one fails. Make sure both collectors have identical configuration
mainCollectorUrl: http://192.168.1.1:8070
# Use this property to turn on Collector forwarding. This allows multiple collectors to be installed on-premise
# which configure their logsentinelBaseUrl to point to the forwarding collector, which in turn forwards the logs to the SIEM server.
forwardingEnabled: false
# Allows authentication of users
authentication:
enabled: true
jwtSecret: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest
jwtSecretVaultKey:
usernames:
- TestUsername
PowerShell-enabled connectors¶
Some connectors use PowerShell to fetch the relevant logs; others support PowerShell as an alternative to WMI. In all cases where remote PowerShell commands are executed, make sure that the following command has been run (as administrator) on the servers where the PowerShell commands will be executed: Enable-PSRemoting
File connector¶
The file connector can parse files in multiple formats (Common log format, delimited, JSON, XML, MySQL audit log, etc.) and retrieve them from multiple sources (Local, SSH, FTP, SMB).
file:
- dataSourceId: ... # override the default dataSourceId to send events to a custom one
format: DELIMITED # one of PLAIN, DELIMITED, REGEX, GROK, XML, JSON, COMMON_LOG_FORMAT, MYSQL_LOG, WINDOWS_DNS_SERVER_LOG, SAP_SECURITY_AUDIT_LOG, HADOOP_LOG, MONGODB_AUDIT_LOG, LINUX_AUDIT_LOG, DATABASE_QUERY_LOG, CUSTOM
sourceProtocol: LOCAL # defines how to retrieve the data, one of LOCAL, SSH, FTP, SMB, POWERSHELL
watchFilePaths: # list of files to watch
- /var/logs/system.log
sendLogsRate: 30000 # how often is data sent to the LogSentinel service
name: ... # a human-readable name for this config, useful when multiple configurations per type are used
fileConfig:
watchIntervalMillis: 30000 # period for checking the files for updates
supportRotation: true # whether log rotation is needed
compressionType: # GZIP or ZLIB - used for reading compressed audit log files
fromStart: false # whether watching should start from the end or from the start of the file
delimitedConfig: ... # see below
regexConfig: ... # see below
grokConfig: ... # see below
accessLogConfig: ... # see below
jsonConfig: ... # see below
xmlConfig: ... # see below
mysqlConfig: ... # see below
sshConfig: ... # see below
ftpConfig: ... # see below
powershellConfig: ... # see below
Supported file formats¶
- Delimited files
- Regex
- Grok
- Common log format
- JSON
- XML
- MySQL
- Windows DNS Server log
- Progress OpenEdge
- Custom (execute custom python script to parse the input)
- SAP Security Audit Log
- Hadoop log
- Linux audit log
- Database query log (with one logged query per line)
- MongoDB audit log
The first six formats above support additional configuration, which can be specified as follows:
Delimited config¶
Parses text files whose columns are delimited with a separator (comma, tab, semicolon, space, pipe, etc.)
delimitedConfig:
separator: # if the file has columns, specify the column separator (;,\t)
csv: #true or false, whether the CSV syntax should be observed, e.g. escaping quotes;
actionIdx: #0-based index of the action column
actorIdx: #0-based index of the actor column
entityTypeIdx: #0-based index of the entityType column
entityIdIdx: #0-based index of the entityId column
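For example, a minimal sketch for a semicolon-separated log where each line looks like 2021-01-01T10:00:00;jsmith;LOGIN;ACCOUNT;12345 (a hypothetical column layout; adjust the 0-based indexes to your file):
delimitedConfig:
  separator: ";"
  csv: false
  actorIdx: 1
  actionIdx: 2
  entityTypeIdx: 3
  entityIdIdx: 4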
Regex config¶
With the regex parser, fields can be extracted by specifying (optional) regex patterns. The value obtained is the one from the first capturing group.
regexConfig:
actorIdRegex: regex
actionRegex: regex
entityTypeRegex: regex
entityIdRegex: regex
timestampRegex: regex
paramRegexes:
paramName1: regex
paramName2: regex
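For instance, a sketch for lines like 2021-01-01 10:00:00 user=jsmith action=LOGIN (a hypothetical format); each value comes from the first capturing group:
regexConfig:
  timestampRegex: '^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})'
  actorIdRegex: 'user=(\w+)'
  actionRegex: 'action=(\w+)'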
Grok config¶
Grok is a superset of regex (see here) that can be used for extracting structured data. In order to extract data via grok patterns, multiple patterns can be configured; the first one that matches the input line handles the extraction.
grokConfig:
grokPatterns:
- pattern1
- pattern2
Example grok patterns for parsing linux auth.log:
%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:host} sshd(?:\[%{POSINT:pid}\])?: (?<action>\w+ \w+) for (invalid user )?%{DATA:actorId} from %{IPORHOST:ip} port %{NUMBER:port} ssh2(: %{GREEDYDATA:signature})?
%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:host} sshd(?:\[%{POSINT:pid}\])?: (?<action>\w+ \w+) user %{DATA:actorId} from %{IPORHOST:ip}
%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:host} sshd(?:\[%{POSINT:pid}\])?: pam_unix\(sshd:auth\): %{GREEDYDATA:action}; %{GREEDYDATA:auth_params} rhost=%{IPORHOST:ip}\s+user=%{GREEDYDATA:actorId}
Common Log Format (Access log) config¶
Access log (e.g. NginX and Apache) based on the Common Log Format
If no accessLogConfig and accessLogFormat are specified, the default one is used: %h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-agent}i"
accessLogConfig:
# the access log format in Common Log Format
accessLogFormat: format
accessLogIgnoredPaths: # option to ignore requests to a list of URIs
- path1
- path2
MySQL Audit Log config¶
MySQL Audit Log in both the new and old format is supported. The database credentials are needed for initialization purposes:
mysqlConfig:
- connectionString: jdbc:mysql://localhost:3306/ # database connection string
  username: root # database user
  password: pass # database password
Windows DNS server log¶
In order to collect Microsoft Windows DNS server logs that contain DNS queries and responses, you need to enable the text-file based debug log and then parse that log. One option is to fetch the log via a shared folder accessible to the collector. The other option is to use an agent to send the log to the SIEM directly.
The way to configure the DNS log is described here. Below is the collector configuration that can be configured to fetch the logs via a UNC path.
file:
- format: WINDOWS_DNS_SERVER_LOG
sourceProtocol: SMB
watchFilePaths: # list of files to watch
- \\server1\logs\dns.log
Progress OpenEdge¶
In order to parse Progress OpenEdge log files, the only thing that should be configured is the type of parser. No additional configuration is required.
file:
- format: OPENEDGE
sourceProtocol: SMB
watchFilePaths: # list of files to watch
- \\server1\logs\openedge.lg
JSON config¶
Parse JSON-based log files where each line is a separate JSON object. In addition to extracting data with JSONPath, regex can be used to further narrow down the required value within the values returned by the JSONPath.
jsonConfig:
actorIdPath: ... # JSONPath for actorId
actorIdRegex: ... # regex for actorId
actorDisplayNamePath: ... # JSONPath for actorDisplayName
actorDisplayNameRegex: ... # regex for actorDisplayName
actionPath: ... # JSONPath for action
actionRegex: ... # regex for action
entityTypePath: ... # JSONPath for entityType
entityTypeRegex: ... # regex for entityType
entityIdPath: ... # JSONPath for entityId
entityIdRegex: ... # regex for entityId
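For example, given hypothetical lines like {"user":{"name":"jsmith"},"event":"LOGIN","target":{"type":"ACCOUNT","id":"12345"}}, a sketch of the corresponding configuration:
jsonConfig:
  actorIdPath: $.user.name
  actionPath: $.event
  entityTypePath: $.target.type
  entityIdPath: $.target.id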
XML config¶
Parse XML-based log files where a log element is repeated. In addition to extracting data with XPath, regex can be used to further narrow down the required value within the values returned by the XPath.
xmlConfig:
logElementName: ... # the name of the opening log element, e.g. logrecord for <logrecord>
actorIdPath: ... # XPath for actorId
actorIdRegex: ... # regex for actorId
actorDisplayNamePath: ... # XPath for actorDisplayName
actorDisplayNameRegex: ... # regex for actorDisplayName
actionPath: ... # XPath for action
actionRegex: ... # regex for action
entityTypePath: ... # XPath for entityType
entityTypeRegex: ... # regex for entityType
entityIdPath: ... # XPath for entityId
entityIdRegex: ... # regex for entityId
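Similarly, for a hypothetical record such as <logrecord><actor>jsmith</actor><action>LOGIN</action></logrecord>, a sketch would be:
xmlConfig:
  logElementName: logrecord
  actorIdPath: //actor
  actionPath: //action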
Custom config¶
Execute a custom Python script to parse each line. The script gets each line as input and should write URL-encoded key-value pairs with parameters to the standard output (e.g. key1=value1&key2=value2). The following keys are reserved for the built-in fields: actorId, action, entityId, entityType, actorDisplayName, originalEventTimestamp. Other key-value pairs are added as params.
customConfig:
pythonScriptFilePath: ... # absolute path to python script
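Putting it together, a sketch of a file entry using a custom parser (the paths are hypothetical):
file:
  - format: CUSTOM
    sourceProtocol: LOCAL
    watchFilePaths:
      - /var/logs/app.log
    customConfig:
      pythonScriptFilePath: /opt/scripts/parse_line.py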
The scripts can be easily tested in two ways:
- using a test collector installation
- using a bash script that feeds each line of the target file to the python script (example)
Python scripts can be generated using a visual editor within the collector, based on Blockly.
File paths and local and SMB sources¶
The Local and SMB sources are interchangeable in some scenarios. Whenever a shared folder is accessed via its UNC path (e.g. \\server1\file.log), the access is performed via SMB; this is a hidden implementation detail, and the folder appears to be accessed locally. The same goes for Linux whenever a remote folder is mounted via SMB. Choose "LOCAL" for truly local logs and SMB for remote ones.
However, if an SMB share requires credentials different from those of the local account on the collector machine, use an SMB source and specify the credentials there.
If you want to read an entire directory rather than just a file, the path should end with a slash.
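For example, a sketch that watches an entire remote directory over SMB (the server and share names are placeholders):
file:
  - format: PLAIN
    sourceProtocol: SMB
    watchFilePaths:
      - \\server1\logs\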
SSH Source¶
Fetching remote files by executing tail -f over SSH.
sshConfig:
host: ... # the host (ip or hostname) to connect to
port: 22 # port to connect to, specify only if non-default is used
username: ... # ssh username
password: ... # ssh password
privateKey: ... # path to a private key (if needed)
privateKeyPassword: ... # password for the private key
FTP Source¶
Tailing remote files over FTP. This is not a recommended approach but is supported as some appliances and devices don't support other ways to deliver logs apart from their own FTP.
ftpConfig:
host: ...
port: ...
username: ...
password: ...
localPassiveMode: true
remotePassiveMode: true
asciiType: false # some FTP servers may need asciiType to be set to true explicitly
Database connector¶
A generic connector to fetch logs from database tables
database:
- jdbcConnectionString: jdbc:mysql://192.168.1.101/db # database connection string
jdbcUsername: root # database username
jdbcPassword: pass # database password
sendLogsRate: 30000 # how often is data sent to the LogSentinel service
dataSourceId: ... # override the default dataSourceId to send events to a custom one
name: ... # a human-readable name for this config, useful when multiple configurations per type are used
watchSqlQueries: # list of queries to be executed against the database
- sql: select * from logs # SQL query
# which column is used for comparing entries. Only entries with value
# of this column above the value of the last sent event will be processed
criteriaColumn: timestamp
paginationClause: # a part of the where clause for pagination, can use ${pageSize}, ${from} and ${to} variables (see the example below)
pageSize: 200 # set the page size for pagination queries
actorDisplayNameColumn: actorDisplayName # column to get the actorDisplayName
actorIdColumns: actorId # comma separated columns that comprise the actorId
actionColumn: action # column to be used for the action
entityIdColumn: entityId # column to be used for entityId
entityTypeColumn: entityType # column to be used for entityType
entityTypeValue: entityType # a hardcoded value for entityType (alternative to specifying a column)
actionValue: action # a hardcoded value for the action (alternative to specifying a column)
- sql: select * from events
criteriaColumn: timestamp2
actorDisplayNameColumn: actorDisplayName2
actorIdColumns: actorId2 #comma separated
actionColumn: action2
entityIdColumn: entityId2
entityTypeColumn: entityType2
entityTypeValue: entityType2
actionValue: action2
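As noted above, paginationClause can use the ${pageSize}, ${from} and ${to} variables. A hypothetical sketch for a numeric, monotonically increasing criteria column (exact substitution behavior may vary per deployment):
watchSqlQueries:
  - sql: select * from logs
    criteriaColumn: id
    paginationClause: id > ${from} and id <= ${to}
    pageSize: 200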
Windows Event Log connector¶
Reading Windows event log locally or remotely (via native interface or via powershell)
windowsEventLog:
- name: ... # a human-readable name for this config, useful when multiple configurations per type are used
dataSourceId: ... # override the default dataSourceId to send events to a custom one
sourceTypes: # list of Windows event log types (Application, Security, System)
- Application
- Security
sources: # an optional allowlist of event log sources (providers) to be processed.
- Source1
- Source2
excludedSources: # a deny list of event sources (providers) not to be processed. Alternative to specifying "sources"
- ExcludedSource1
collectAllEvents: false # by default the collector fetches only a set of base events. Set this to true if all events are required
sendLogsRate: 30000 # how often is data sent to the LogSentinel service
mode: NATIVE # Defines whether to use the NATIVE Win32 API (default) or PowerShell commands (specified by POWERSHELL value)
remote: # connect to a remote event log. Leave user, domain and password blank to use current user
user:
domain:
password:
server: # ip address or domain name
authMethod: # auth constant, starting from 0 https://docs.microsoft.com/en-us/windows/win32/api/winevt/ne-winevt-evt_rpc_login_flags
Syslog connector¶
There is only one syslog connector; it listens on both TCP and UDP and is capable of parsing any syslog message. In order to assign a source to a data source in the SIEM, a mapping between IP/host and data source should be provided in the sourceConfigurations property. The IP address or hostname should be the one provided in the syslog header, or, if the host header is missing, the IP of the sender. The "default" key can be used for a fallback data source. Alternatively, the hostToDataSourceId map can be used for simpler mapping; it doesn't support additional configuration options per source.
syslog:
sourceConfigurations: # a map of source IP(s) or hostnames to data source id(s). Use "default" to match "any IP/host"
- host: "172.10.2.12"
dataSourceId: <data-source-id>
- host: "device-hostname"
dataSourceId: <data-source-id>
excludeRegexes:
- regexToExclude
- host: "default"
dataSourceId: <fallback-data-source-id>
hostToDataSourceId: # simpler and less flexible way to configure mappings
"172.10.2.12": <data-source-id>
"device-hostname": <data-source-id>
"default": <fallback-data-source-id>
tcpPort: # TCP port to listen on, defaults to 2514
tcpRfc6587Port: # TCP port to listen on for RFC6587 messages, defaults to 2515
udpPort: # UDP port to listen to, defaults to 2516
queueSize: # the number of threads to queue incoming requests
sendLogsRate: # rate at which data is batched to the server (defaults to 0="immediate")
name: # human-readable name of the source
NetFlow/IPFIX connector¶
The collector can be configured to act as a NetFlow v9 and IPFIX collector
netFlow: # configuration for the NetFlow (v9) connector
name: ... # a human-readable name for this config, useful when multiple configurations per type are used
port: 2055 # the port at which the NetFlow v9 collector listens (default port for NetFlow is 2055)
hostToDataSourceId: # a map of source IP(s) or hostnames to data source id(s). Use "default" to match "any IP/host"
"172.10.2.12": <data-source-id>
"device-hostname": <data-source-id>
"default": <fallback-data-source-id>
sFlow connector¶
The collector can be configured to act as an sFlow collector
sFlow: # configuration for the sFlow connector
name: ... # a human-readable name for this config, useful when multiple configurations per type are used
port: 6343 # the port at which the sFlow collector listens (default port for sFlow is 6343)
hostToDataSourceId: # a map of source IP(s) to data source id(s). Use "default" to match "any IP"
"127.0.0.1": <some-data-source-id>
"default": <other-data-source-id>
SNMP connector¶
LogSentinel Collector can receive SNMP traps and forward them to the server
snmp:
hostToDataSourceId: # a map of source IP(s) to data source id(s). Use "default" to match "any IP"
"172.10.2.12": <some-data-source-id>
"default": <other-data-source-id>
port: 162 # change to override the default port
Exchange Admin Log connector¶
Reading Microsoft Exchange Admin log. For more information on the parameters, see the official Exchange documentation
exchangeAdminLog:
- exchangeUrl: # url of the exchange server
username: # username to connect with
password: # password to connect with (if username and password are not specified, the current account is used)
dateFormat: # override the default date format if needed in order to parse the dates
sendLogsRate: 30000 # how often is data sent to the LogSentinel service
dataSourceId: ... # override the default dataSourceId to send events to a custom one
name: ... # a human-readable name for this config, useful when multiple configurations per type are used
MS SQL Server connectors¶
LogSentinel Collector supports several different MS SQL Server connectors. For enabling the preconditions for the various connectors, check the MS SQL Server page.
Audit Log Connector¶
mssqlAuditLog:
- jdbcConnectionString: jdbc:sqlserver://192.168.1.101 # database connection string
jdbcUsername: sa # database username
jdbcPassword: pass # database password
sendLogsRate: 30000 # how often is data sent to the LogSentinel service
dataSourceId: ... # override the default dataSourceId to send events to a custom one
name: ... # a human-readable name for this config, useful when multiple configurations per type are used
# path where the MS SQL Server audit log file is stored.
# See https://docs.logsentinel.com/collector/sql-server/
mssqlLogsPath: c:\auditlog\
MS SQL Server Change Tracking connector¶
mssqlChangeTracking:
- jdbcConnectionString: jdbc:sqlserver://192.168.1.101 # database connection string
jdbcUsername: sa # database username
jdbcPassword: pass # database password
sendLogsRate: 30000 # how often is data sent to the LogSentinel service
dataSourceId: ... # override the default dataSourceId to send events to a custom one
name: ... # a human-readable name for this config, useful when multiple configurations per type are used
databases: # list of databases for which changes have to be tracked
- db1
- db2
# tables for which changes should be ignored (use the full table name, including database and schema)
ignoredTables:
- db1.dbo.table1
- db2.dbo.table3
# tables for which changes should be monitored. If not specified, all tables are monitored.
includedTables:
- db1.dbo.table1
- db2.dbo.table3
MS SQL Server Event Log connector¶
mssqlEventLog:
- ... # same properties as windowsEventLog
MS SQL Server Login connector¶
mssqlLogin:
- jdbcConnectionString: jdbc:sqlserver://192.168.1.101;databaseName=db # database connection string
jdbcUsername: root # database username
jdbcPassword: pass # database password
sendLogsRate: 30000 # how often is data sent to the LogSentinel service
dataSourceId: ... # override the default dataSourceId to send events to a custom one
name: ... # a human-readable name for this config, useful when multiple configurations per type are used
includedUsers: ... # a list of users to include (the rest are ignored)
compactDuplicateLogins: false # whether duplicate login events should be ignored
Oracle connector¶
The Oracle connector can be used to read Oracle audit logs. Refer to this page for configuring the audit log.
oracle:
- jdbcConnectionString: jdbc:oracle:thin:@192.168.1.110:1521:orcl # database connection string
jdbcUsername: SYS AS SYSDBA # database username
jdbcPassword: pass # database password
sendLogsRate: 30000 # how often is data sent to the LogSentinel service
dataSourceId: ... # override the default dataSourceId to send events to a custom one
name: ... # a human-readable name for this config, useful when multiple configurations per type are used
fgaPolicies: # Fine Grained Auditing policies
- objectSchema: TESTUSER # schema
  objectName: PERSONS # table name
  policyName: testPolicy # policy name (required)
  auditCondition: Age != 0
  auditColumn: City
  enabled: true
  statementTypes: SELECT, INSERT, UPDATE, DELETE
auditPolicies: # Standard audit policies
# for the available audit options, see SELECT * FROM SYS.STMT_AUDIT_OPTION_MAP
- userName: testUser # user to which the policy applies, leave empty to apply to all users
auditOption: UPDATE TABLE # Action that will be audited
- auditOption: SELECT TABLE
PostgreSQL connector¶
Watches a PostgreSQL database using pgaudit
postgre:
- name: ... # a human-readable name for this config, useful when multiple configurations per type are used
jdbcConnectionString: jdbc:postgresql://xxx.xxx.xxx:5432/postgres # the JDBC connection string
jdbcUsername: ... # postgre username
jdbcPassword: ... # postgre password
sendLogsRate: 30000 # how often is data sent to the LogSentinel service in millis
pgauditEnabled: ... # whether pgaudit is required or the native log file is used
schemas: # schemas to monitor for audit logs
- public
excludedTables: # an optional list of tables to exclude from processing
- ...
SSH User activity connector¶
This connector monitors the result of the w command to remotely track user activity on a Linux system
sshUser:
- dataSourceId: c04bbd80-219e-11ea-bc18-c5a6448d7eee
name: ... # a human-readable name for this config, useful when multiple configurations per type are used
sendLogsRate: 30000 # how often is data sent to the LogSentinel service
sshConfig:
host: ... # the host (ip or hostname) to connect to
port: 22 # port to connect to, specify only if non-default is used
username: ... # ssh username
password: ... # ssh password
privateKey: ... # path to a private key (if needed)
privateKeyPassword: ... # password for the private key
Directory connector¶
This connector monitors directories for changes
directory:
- watchDirPath: /var/logs # directory to watch for changes
sendLogsRate: 30000
dataSourceId: ... # override the default dataSourceId to send events to a custom one
name: ... # a human-readable name for this config, useful when multiple configurations per type are used
useDocumentApi: false # whether to use the document API (more convenient for tracking documents)
sendHash: false # whether to send just the hash of the file rather than the whole body
maxFileSize: 0 # the max allowed file size to send to the server; above it, a hash is sent; 0 means no limit (note that there's a server limit)
skipActorId: false # whether to skip extracting the actorId for each event (auditing functionality must be turned on)
sendInBatches: true # whether to send the data in batches or in real time as events come
vSphere connector¶
vSphere:
- name: # a human-readable name for this config, useful when multiple configurations per type are used
username: # vCenter username
password: # vCenter password
serverName: # vCenter server name (IP or FQDN)
sendLogsRate: 30000 # how often is data sent to the LogSentinel service
Leaked credentials connector¶
Watches mail server for email addresses to send for leaked credentials monitoring.
leakedCredentials:
- username: # username used to connect to Exchange server
password: # password used to connect to Exchange server
groupName: # group name from which emails will be obtained
ldapDN: #
ldapProviderUrl: # url of the ldap server
ldapPrincipal: # DN to bind with if the LDAP server doesn't allow anonymous reads
ldapPassword: # password to bind with if the LDAP server doesn't allow anonymous reads
ldapEnabled: false # 'true' when using LDAP, 'false' when using Exchange server
sendLogsRate: 259200000 # how often is data sent to the LogSentinel service (3 days)
SAP Read Access Log connector¶
sapReadAccessLog:
- jdbcConnectionString: jdbc:sqlserver://192.168.1.101;databaseName=db # database connection string
jdbcUsername: root # database username
jdbcPassword: pass # database password
sendLogsRate: 30000 # how often is data sent to the LogSentinel service
dataSourceId: ... # override the default dataSourceId to send events to a custom one
name: ... # a human-readable name for this config, useful when multiple configurations per type are used
LDAP Query connector¶
Used to regularly execute queries against an LDAP server and transform the results to events (useful for scanning for certain conditions, e.g. users with non-expiring passwords)
ldapQuery:
- providerUrl: ldap://192.168.1.101:389 # LDAP server URL
securityPrincipal: # username
securityCredentials: # password
sendLogsRate: 30000 # how often is data sent to the LogSentinel service
dataSourceId: ... # override the default dataSourceId to send events to a custom one
name: ... # a human-readable name for this config, useful when multiple configurations per type are used
Network Monitoring¶
The network monitoring connector does full packet capture on a specified network interface/adapter, transforms the data to flow data and extracts information of interest, like domain names and emails.
networkMonitoring:
name: # a human-readable name for this config, useful when multiple configurations per type are used
dataSourceId: # the ID of the datasource (taken from the data source configuration in the SIEM)
networkAdapter: # the name of the network adapter to listen to
promiscuousMode: false # whether to use promiscuous mode
Asset Discovery connector¶
Enable asset discovery by specifying a CIDR to scan
discovery:
  - monitoredCidr: 192.168.0.1/24 # example CIDR to scan
Change Data Capture (CDC) connector¶
Enabling change data capture using Debezium.
cdc:
# Full documentation for supported databases can be found here:
# https://debezium.io/docs/connectors/mysql/
# https://debezium.io/docs/connectors/postgresql/
# https://debezium.io/docs/connectors/oracle/
# https://debezium.io/docs/connectors/sqlserver/
# regexes to extract actor, action, entityId and entityType from stringified data provided by debezium
- actorRegex: .*
actionRegex: .*
entityIdRegex: .*
entityTypeRegex: .*
name: ... # a human-readable name for this config, useful when multiple configurations per type are used
# supported databases are MYSQL, ORACLE, POSTGRES, MSSQL
database: MYSQL
offsetFilename: offset.txt #path to file that stores current processed state
databaseHost: localhost
databasePort: 3306
databaseDbname: test # database name
databaseUser: user
databasePassword: password
databaseServerName: serverName # logical name, used to distinguish different debezium instances (if any)
databaseHistoryFilename: history.txt # file where the connector will write and recover DDL statements
tableWhiteList: table1 # tables that will be monitored (only for some databases)
# Additional properties supported by debezium can be placed here. Unsupported keys are ignored
# Can override existing hardcoded values of properties offset.flush.interval.ms and server.id
additionalProperties:
key1: value1
Honeypot¶
Enable a honeypot on various ports
honeypot:
dataSourceId: ... # override the default dataSourceId to send events to a custom one
testMode: false # testMode disables fetching scanner IPs (mainly for performance) and doesn't add firewall blocking rules
ignoredCIDR: ... # list of CIDRs which won't be blocked if a connection is attempted
protocols:
- SSH
- HTTPS
- HTTP
- RDP
- SMB
- FTP
OSSEC connector¶
The OSSEC connector serves as an OSSEC manager which receives messages from all the installed OSSEC/Wazuh agents.
ossec:
hostToDataSourceId: # a map of source IP(s) to data source id(s). Use "default" to match "any IP"
"172.10.2.12": <some-data-source-id>
"default": <other-data-source-id>
sendLogsRate: 30000
queueSize: 5
udpPort: 1514
agentKeys:
agentId: key
Vault Audit Log connector¶
Serves as a HashiCorp Vault audit device
vaultAuditLog:
- port: 9090 # port to listen to
name: vault-logs # human-readable name
dataSourceId: ... # the id of the data source to send data for
Email connector¶
The email connector listens to incoming email and parses the messages as logs:
email:
# IMAP URL - replace <username> and <password> with real ones. Replace imap.gmail.com for other providers
# Make sure the IMAP server accepts requests from outside (Gmail does not by default)
imapInboxUrl: imap://<username>:<password>@imap.gmail.com/INBOX
actionRegex: .* # regex extracting action from subject or body
actionRegexSubject: true # whether to search for the action in the subject (true) or the body (false)
entityIdRegex: .* # regex extracting entityId from subject or body
entityIdRegexSubject: false # whether to search for the entityId in the subject (true) or the body (false)
entityTypeRegex: .* # regex extracting entityType from subject or body
entityTypeRegexSubject: false # whether to search for the entityType in the subject (true) or the body (false)
name: ... # a human-readable name for this config, useful when multiple configurations per type are used
Connectivity connector¶
The connectivity connector periodically pings a given host (configured with host, port and protocol) to check if it is reachable, and sends logs to a configured data source
connectivity:
dataSourceId: # override the default dataSourceId to send events to a custom one
targets: # a list of objects, each consisting of host, port and protocol (TCP/UDP)
- host: 127.0.0.1
port: 80
protocol: TCP
scanInterval: 2000 #Interval in ms at which the targets are pinged
AxonDB connector¶
LogSentinel Collector can plug into the AxonDB log
axonDB:
- trackingToken: 0 # AxonDB tracking token
action: LOG_AXON # Hardcoded action value
batchEnabled: false # Use batch queries
batchInterval: 10000 # Batch interval in case batch queries are enabled
dataSourceId: ... # override the default dataSourceId to send events to a custom one
name: ... # a human-readable name for this config, useful when multiple configurations per type are used
Enrichment configuration¶
Multiple enrichment sources can be specified to add parameters to each log entry.
DHCP enrichment on Windows requires PowerShell remoting. In order to configure PowerShell remoting for non-admin users, check this guide
enrichment:
- enrichmentType: # ACTIVE_DIRECTORY, DATABASE, DHCP
logEntryLookupParamName: # name of the parameter (or actorId/action/entityId/entityType) to use to query the enrichment source
sourceLookupParamName: # name of the enrichment source parameter to match the log entry param name
enrichmentFields: # list of fields to get from the enrichment source and add to the log entry
credentials: # credentials to access the enrichment source
targetDataSourceIds: # sources to which enrichment should be applied. If left blank, it is applied to all
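As a hypothetical sketch, an Active Directory enrichment that looks up the log entry's actorId against sAMAccountName and copies two directory attributes onto the entry (the attribute names and credentials format are assumptions, not a tested configuration):
enrichment:
  - enrichmentType: ACTIVE_DIRECTORY
    logEntryLookupParamName: actorId
    sourceLookupParamName: sAMAccountName
    enrichmentFields: # attributes copied onto the log entry (assumed names)
      - department
      - mail
    credentials: ... # credentials to access the directory, as required by your deployment
    targetDataSourceIds:
      - <data-source-id>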
HTTPS Configuration¶
If the collector needs to be accessed via HTTPS (rather than plain HTTP), the following configuration options need to be set:
server.ssl.key-store=/path/to/keystore.jks
server.ssl.key-store-password=password
server.ssl.key-password=password
server.ssl.keyStoreType=JKS
server.ssl.keyAlias=cert
server.ssl.enabled-protocols=TLSv1.2
server.ssl.enabled=true
Instructions on generating the keystore can be found here.