On-premises Installation On Linux (Manual, CentOS)

Setting up a single node

The following steps have to be performed to manually set up a Linux-based (CentOS 8) LogSentinel node:

  1. Download the logsentinel.jar artifact provided by LogSentinel and place it in /var/logsentinel
  2. Make sure the yum repos are available
  3. Execute the following commands:
sudo mkdir -p /var/logsentinel
cd /var/logsentinel/
# After download review and edit the passwords in the sample app.properties file
wget https://logsentinel-sandbox-public.s3-eu-west-1.amazonaws.com/node-setup/app.properties
wget https://logsentinel-sandbox-public.s3-eu-west-1.amazonaws.com/node-setup/logsentinel.conf
wget https://logsentinel-sandbox-public.s3-eu-west-1.amazonaws.com/node-setup/logsentinel-setup.sh
sudo chmod +x *.sh
sudo ./logsentinel-setup.sh 

Configuring a cluster

To configure a cluster, execute the following steps.

Where "$BIND_IP" is used, it refers to the local address of the machine, which can be obtained via:

BIND_IP=$(ifconfig | sed -En 's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p')

Get a list of all the IP addresses of individual nodes before you start.
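For the steps below it helps to keep the node addresses in shell variables. A minimal sketch; the 10.0.0.x addresses are placeholder examples, substitute the real IPs of your nodes:

```shell
# Example node addresses -- replace with the real IPs of your nodes.
BIND_IP_1=10.0.0.11
BIND_IP_2=10.0.0.12
BIND_IP_3=10.0.0.13
# A comma-separated list is handy for the Cassandra and LogSentinel configs below.
NODE_LIST="$BIND_IP_1,$BIND_IP_2,$BIND_IP_3"
echo "$NODE_LIST"
```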

Setting up Cassandra

Do the following on all nodes:

  1. edit /etc/cassandra/conf/cassandra.yaml
  2. set listen_address: $BIND_IP and rpc_address: $BIND_IP
  3. set seeds to be the list of IPs of all nodes
  4. set auto_bootstrap: false
  5. execute sudo service cassandra start
  6. execute sudo chkconfig cassandra on
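Steps 1–4 can also be scripted with sed. A sketch, demonstrated against a scratch copy of the file so the edits are easy to inspect (the IPs below are placeholder examples); on a real node, point CONF at /etc/cassandra/conf/cassandra.yaml and prefix the sed commands with sudo:

```shell
# Scratch copy standing in for /etc/cassandra/conf/cassandra.yaml
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
listen_address: localhost
rpc_address: localhost
    - seeds: "127.0.0.1"
auto_bootstrap: true
EOF
BIND_IP=10.0.0.11                      # example local address
SEEDS="10.0.0.11,10.0.0.12,10.0.0.13"  # example list of all node IPs
sed -i "s/^listen_address:.*/listen_address: $BIND_IP/" "$CONF"
sed -i "s/^rpc_address:.*/rpc_address: $BIND_IP/" "$CONF"
sed -i "s/- seeds:.*/- seeds: \"$SEEDS\"/" "$CONF"
sed -i "s/^auto_bootstrap:.*/auto_bootstrap: false/" "$CONF"
cat "$CONF"
```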

Check if all nodes have joined the cluster using nodetool status (all should have status UN, meaning Up, Normal).
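The UN check can be automated by inspecting the first column of the status lines. A sketch that parses a captured sample (the IPs and host IDs are illustrative); on a live node, pipe the output of nodetool status in instead:

```shell
# Sample body of `nodetool status`; the first column is the node state.
STATUS='UN  10.0.0.11  256.3 KiB  256  100.0%  0a1b  rack1
UN  10.0.0.12  251.1 KiB  256  100.0%  2c3d  rack1
UN  10.0.0.13  248.7 KiB  256  100.0%  4e5f  rack1'
# Count node lines whose state is anything other than UN (Up, Normal).
NOT_UN=$(printf '%s\n' "$STATUS" | awk '$1 ~ /^[UD][NLJM]$/ && $1 != "UN"' | wc -l)
[ "$NOT_UN" -eq 0 ] && echo "all nodes are UN" || echo "$NOT_UN node(s) not UN"
```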

Finally, execute the following on one of the nodes:

cqlsh -e "CREATE KEYSPACE IF NOT EXISTS logsentinel WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 2 } AND DURABLE_WRITES = true;" $BIND_IP 9042

Setting up Elasticsearch

Do the following on all nodes:

  1. sudo sed -i -- "s/#network.host: 192.168.0.1/network.host: $BIND_IP/g" /etc/elasticsearch/elasticsearch.yml
  2. set node.name to logsentinel-1, logsentinel-2, logsentinel-3 and so on.
  3. set discovery.seed_hosts to ["$BIND_IP_1", "$BIND_IP_2", "$BIND_IP_3"] (with the IPs of all nodes)
  4. set cluster.initial_master_nodes to ["logsentinel-1", "logsentinel-2", "logsentinel-3"] (and so on)
  5. sudo chkconfig --add elasticsearch
  6. sudo chkconfig elasticsearch on
  7. sudo service elasticsearch start

Wait until all nodes are up and have joined the cluster. Check with curl http://$BIND_IP:9200/_cat/nodes
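_cat/nodes prints one line per node that has joined, so counting lines gives a quick readiness check. A sketch against a captured sample (the IPs and node names below are illustrative):

```shell
# Captured sample of /_cat/nodes output -- one line per joined node.
NODES='10.0.0.11 34 99 1 0.10 0.12 0.09 dilm * logsentinel-1
10.0.0.12 28 98 1 0.08 0.11 0.10 dilm - logsentinel-2
10.0.0.13 31 97 1 0.12 0.09 0.08 dilm - logsentinel-3'
JOINED=$(printf '%s\n' "$NODES" | wc -l)
echo "$JOINED of 3 nodes joined"
```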

Setting up LogSentinel

Do the following on all nodes:

  1. Edit /var/logsentinel/app.properties
  2. Set root.url to be the chosen domain name or IP of the load balancer
  3. Set cassandra.hosts=$BIND_IP_1,$BIND_IP_2,$BIND_IP_3 (with all node IPs)
  4. Set hazelcast.nodes=$BIND_IP_1,$BIND_IP_2,$BIND_IP_3 (with all node IPs)
  5. Set elasticsearch.url=http://$BIND_IP_1:9200,http://$BIND_IP_2:9200,http://$BIND_IP_3:9200 (with all node IPs)
  6. sudo service logsentinel start
  7. sudo chkconfig logsentinel on
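After steps 1–5, the relevant part of app.properties should look roughly like this (the domain and IPs are placeholder examples):

```properties
root.url=https://logsentinel.example.com
cassandra.hosts=10.0.0.11,10.0.0.12,10.0.0.13
hazelcast.nodes=10.0.0.11,10.0.0.12,10.0.0.13
elasticsearch.url=http://10.0.0.11:9200,http://10.0.0.12:9200,http://10.0.0.13:9200
```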

Wait until curl http://localhost:8080/login returns a response.
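The wait can be scripted with a small retry helper. A hedged sketch (wait_for is our own helper name, not part of LogSentinel); the commented line shows how it would be used against the login endpoint:

```shell
# Retry a command up to $1 times, one second apart; fail if it never succeeds.
wait_for() {
  tries=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    if [ "$i" -ge "$tries" ]; then return 1; fi
    sleep 1
  done
}
# On a node: wait_for 60 curl -sf -o /dev/null http://localhost:8080/login
wait_for 3 true && echo "service is up"
```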

Load Balancer setup

The Load balancer node is configured as follows:

sudo yum -y install nginx

sudo tee /etc/nginx/conf.d/logsentinel.conf > /dev/null <<EOT
upstream backend {
   server $BIND_IP_1:8080 max_fails=3 fail_timeout=45s;
   server $BIND_IP_2:8080 max_fails=3 fail_timeout=45s;
   server $BIND_IP_3:8080 max_fails=3 fail_timeout=45s;
}

server {
    listen 80;
    listen 443 ssl;
    server_name $DOMAIN;

    location / {
      proxy_pass http://backend;
      proxy_set_header Host            \$host;
      proxy_set_header X-Forwarded-For \$remote_addr;
      proxy_set_header X-Forwarded-Proto \$scheme;
    }

    if (\$scheme = http) {
      return 301 https://\$server_name\$request_uri;
    }

    ssl_certificate /etc/ssl/certs/logsentinel.crt;
    ssl_certificate_key /etc/ssl/private/logsentinel.key;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;

    ssl_session_cache shared:le_nginx_SSL:1m;
    ssl_session_timeout 1440m;

    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;

    ssl_ciphers "ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS";
}
EOT

sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

# only needed in case SELinux is present - allowing connecting to the app nodes
sudo setsebool -P httpd_can_network_connect 1

sudo service nginx start