Configuring the infrastructure services

RabbitMQ

RabbitMQ single node

RabbitMQ as an AMQP broker with an admin user and virtual hosts

rabbitmq:
  server:
    enabled: true
    bind:
      address: 0.0.0.0
      port: 5672
    secret_key: rabbit_master_cookie
    admin:
      name: adminuser
      password: pwd
    plugins:
    - amqp_client
    - rabbitmq_management
    virtual_hosts:
    - enabled: true
      host: '/monitor'
      user: 'monitor'
      password: 'password'
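
A quick way to verify the result after the state run is to query the broker with RabbitMQ's own command-line tooling on the broker node (these commands are part of RabbitMQ itself, not the formula):

rabbitmqctl list_users
rabbitmqctl list_vhosts
rabbitmq-plugins list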

RabbitMQ as a STOMP broker

rabbitmq:
  server:
    enabled: true
    secret_key: rabbit_master_cookie
    bind:
      address: 0.0.0.0
      port: 5672
    virtual_hosts:
    - enabled: true
      host: '/monitor'
      user: 'monitor'
      password: 'password'
    plugins:
    - rabbitmq_stomp
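
The rabbitmq_stomp plugin listens on TCP port 61613 by default. A simple way to confirm the listener is up after the state run (assuming ss is available on the node):

ss -tln | grep 61613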

RabbitMQ cluster

RabbitMQ as base cluster node

rabbitmq:
  server:
    enabled: true
    bind:
      address: 0.0.0.0
      port: 5672
    secret_key: rabbit_master_cookie
    admin:
      name: adminuser
      password: pwd
  cluster:
    enabled: true
    role: master
    mode: disc
    members:
    - name: openstack1
      host: 10.10.10.212
    - name: openstack2
      host: 10.10.10.213
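
Once both members have been configured and joined, the cluster state can be inspected from any node with the standard RabbitMQ command:

rabbitmqctl cluster_status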

HA Queues definition

rabbitmq:
  server:
    enabled: true
    ...
    virtual_hosts:
    - enabled: true
      host: '/monitor'
      user: 'monitor'
      password: 'password'
      policies:
      - name: HA
        pattern: '^(?!amq\.).*'
        definition: '{"ha-mode": "all"}'
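
To confirm the HA policy is active, list the policies on the virtual host with rabbitmqctl (standard RabbitMQ tooling, run on the broker node):

rabbitmqctl list_policies -p /monitor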

MySQL

MySQL database - simple

mysql:
  server:
    enabled: true
    version: '5.5'
    admin:
      user: root
      password: pwd
    bind:
      address: '127.0.0.1'
      port: 3306
    database:
      name:
        encoding: 'utf8'
        users:
        - name: 'username'
          password: 'password'
          host: 'localhost'
          rights: 'all privileges'
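
As a minimal sanity check after the state run, log in with the user credentials from the example above and list the visible databases:

mysql -u username -ppassword -h localhost -e "SHOW DATABASES;"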

MySQL database - configured

mysql:
  server:
    enabled: true
    version: '5.5'
    admin:
      user: root
      password: pwd
    bind:
      address: '127.0.0.1'
      port: 3306
    key_buffer: 250M
    max_allowed_packet: 32M
    max_connections: 1000
    thread_stack: 512K
    thread_cache_size: 64
    query_cache_limit: 16M
    query_cache_size: 96M
    force_encoding: utf8
    database:
      name:
        encoding: 'utf8'
        users:
        - name: 'username'
          password: 'password'
          host: 'localhost'
          rights: 'all privileges'
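
To verify that the tuning options were applied to the running server, read one of them back; with the pillar above, max_connections should report 1000:

mysql -u root -ppwd -e "SHOW VARIABLES LIKE 'max_connections';"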

Galera database cluster

Galera cluster master node

galera:
  master:
    enabled: true
    name: openstack
    bind:
      address: 192.168.0.1
      port: 3306
    members:
    - host: 192.168.0.1
      port: 4567
    - host: 192.168.0.2
      port: 4567
    admin:
      user: root
      password: pwd
    database:
      name:
        encoding: 'utf8'
        users:
        - name: 'username'
          password: 'password'
          host: 'localhost'
          rights: 'all privileges'

Galera cluster slave node

galera:
  slave:
    enabled: true
    name: openstack
    bind:
      address: 192.168.0.2
      port: 3306
    members:
    - host: 192.168.0.1
      port: 4567
    - host: 192.168.0.2
      port: 4567
    admin:
      user: root
      password: pass

Galera cluster - Usage

MySQL Galera check scripts

mysql> SHOW STATUS LIKE 'wsrep%';

mysql> SHOW STATUS LIKE 'wsrep_cluster_size';
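
For the two-member cluster defined above, a healthy cluster reports a size equal to the number of joined nodes; the output looks roughly like this:

+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 2     |
+--------------------+-------+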

Galera monitoring command (garbd, the Galera Arbitrator daemon), run from a separate server

garbd -a gcomm://ipaddrofone:4567 -g my_wsrep_cluster -l /tmp/1.out -d
  1. salt-call state.sls mysql
  2. Comment out every line starting with wsrep* in my.cnf (wsrep_provider, wsrep_cluster, wsrep_sst).
  3. service mysql start
  4. Run mysql_secure_installation on each node and set the root password. The dialog looks like this:
Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MySQL
root user without the proper authorisation.

Set root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!

By default, a MySQL installation has an anonymous user, allowing anyone
to log into MySQL without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] n
 ... skipping.

By default, MySQL comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...
  1. service mysql stop
  2. Uncomment all wsrep* lines; on the first server, leave only wsrep_cluster_address='gcomm://' in my.cnf.
  3. Start the first node.
  4. Start the third node, which connects to the first one.
  5. Start the second node, which connects to the third one.
  6. After the cluster is up, change the cluster address on the first node without restarting the database, and update my.cnf accordingly:
mysql> SET GLOBAL wsrep_cluster_address='gcomm://10.0.0.2';
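
The change takes effect immediately; you can read the variable back to confirm:

mysql> SHOW VARIABLES LIKE 'wsrep_cluster_address';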

Metering database (Graphite)

  1. Set up the monitoring node for metering.
root@cfg01:~# salt 'mon01*' state.sls git,rabbitmq,postgresql
root@cfg01:~# salt 'mon01*' state.sls graphite,apache
  2. Make some manual adjustments.
root@mon01:~# service carbon-aggregator start
root@mon01:~# apt-get install python-django=1.6.1-2ubuntu0.11
root@mon01:~# service apache2 restart
  3. Update all client nodes in the infrastructure for the metrics service.
root@cfg01:~# salt "*" state.sls collectd.client
  4. Check the browser for the metering service output (a quick command-line sanity check is sketched below).
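
Before relying on the browser output, you can push a test metric into carbon over the plaintext protocol (this assumes carbon listens on its default plaintext port 2003 and a netcat variant that supports -q is installed; the metric name test.metric is arbitrary):

root@mon01:~# echo "test.metric 1 $(date +%s)" | nc -q0 localhost 2003

The metric should then be browsable as test.metric in the Graphite web interface.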

Monitoring server (Sensu)

Installation

  1. Set up the monitoring node.
root@cfg01:~# salt 'mon01*' state.sls git,rabbitmq,redis
root@cfg01:~# salt 'mon01*' state.sls sensu
  2. Update all client nodes in the infrastructure.
root@cfg01:~# salt "*" state.sls sensu.client
  3. Update the check definitions on the Sensu server based on the model.
root@cfg01:~# salt "*" state.sls sensu.client
root@cfg01:~# salt "*" state.sls salt
root@cfg01:~# salt "*" mine.flush
root@cfg01:~# salt "*" mine.update
root@cfg01:~# salt "*" service.restart salt-minion
root@cfg01:~# salt "mon*" state.sls sensu.server

# as a one-liner

salt "*" state.sls sensu.client; salt "*" state.sls salt.minion; salt "*" mine.flush; salt "*" mine.update; salt "*" service.restart salt-minion; salt "mon*" state.sls sensu.server

salt 'mon*' service.restart rabbitmq-server; salt 'mon*' service.restart sensu-server; salt 'mon*' service.restart sensu-api; salt '*' service.restart sensu-client
  4. View the monitored infrastructure in the web user interface.
http://185.22.97.69:8088

Creating checks

Checks can be created in two different ways.

Service driven checks

Checks are created and populated by existing services. The check definition is stored at formula_name/files/sensu.conf. For example, the OpenSSH service creates a check that verifies the sshd process is running.

local_openssh_server_proc:
  command: "PATH=$PATH:/usr/lib64/nagios/plugins:/usr/lib/nagios/plugins check_procs -a '/usr/sbin/sshd' -u root -c 1:1"
  interval: 60
  occurrences: 1
  subscribers:
  - local-openssh-server
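
On the Sensu server this definition is rendered into Sensu's JSON check format; the result is roughly equivalent to the following (a sketch of the rendered check, not the literal file written by the formula):

{
  "checks": {
    "local_openssh_server_proc": {
      "command": "PATH=$PATH:/usr/lib64/nagios/plugins:/usr/lib/nagios/plugins check_procs -a '/usr/sbin/sshd' -u root -c 1:1",
      "interval": 60,
      "occurrences": 1,
      "subscribers": [
        "local-openssh-server"
      ]
    }
  }
}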

Arbitrary check definitions

These custom checks are created from definitions in the system.sensu.server.checks class, which must be included in the monitoring node definition.

parameters:
  sensu:
    server:
      checks:
      - name: local_service_name_proc
        command: "PATH=$PATH:/usr/lib64/nagios/plugins:/usr/lib/nagios/plugins check_procs -C service-name"
        interval: 60
        occurrences: 1
        subscribers:
        - local-service-name-server

Create the file /etc/sensu/conf.d/check_graphite.json:

{
  "checks": {
    "remote_graphite_users": {
      "subscribers": [
        "remote-network"
      ],
      "command": "~/sensu-plugins-graphite/bin/check-graphite-stats.rb --host 127.0.0.1 --period -2mins --target 'default_prd.*.users.users'  --warn 1 --crit 2",
      "handlers": [
        "default"
      ],
      "occurrences": 1,
      "interval": 30
    }
  }
}

Restart the sensu-server service:

root@mon01:~# service sensu-server restart
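
After the restart, the registered checks and connected clients can be listed through the Sensu API (assuming it listens on its default port 4567 on the monitoring node):

root@mon01:~# curl -s http://localhost:4567/checks
root@mon01:~# curl -s http://localhost:4567/clients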