Salt Formula

Salt is a new approach to infrastructure management. Easy enough to get running in minutes, scalable enough to manage tens of thousands of servers, and fast enough to communicate with them in seconds.

Salt delivers a dynamic communication bus for infrastructures that can be used for orchestration, remote execution, configuration management and much more.
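Once the master and its minions are connected, the bus can be exercised directly with the standard test.ping and cmd.run execution modules (the targets shown here are illustrative):

```shell
# Verify that all registered minions respond over the bus
salt '*' test.ping
# Run an arbitrary command on a glob-matched subset of minions
salt 'web*' cmd.run 'uptime'
```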

Sample Metadata

Salt Master

Salt master with base formulas and pillar metadata backend

git:
  client:
    enabled: true
linux:
  system:
    enabled: true
salt:
  master:
    enabled: true
    command_timeout: 5
    worker_threads: 2
    base_environment: prd
    environment:
      prd:
        formula:
          service01:
            source: git
            address: 'git@git.domain.com:service01-formula.git'
            revision: master
          service02:
            source: pkg
            name: salt-formula-service02 
    pillar:
      engine: salt
      source:
        engine: git
        address: 'git@repo.domain.com:salt/pillar-demo.git'
        branch: 'master'

Salt master with reclass ENC metadata backend

git:
  client:
    enabled: true
linux:
  system:
    enabled: true
reclass:
  storage:
    enabled: true
    data_source:
      engine: git
      address: 'git@git.domain.com'
      branch: master
salt:
  master:
    enabled: true
    command_timeout: 5
    worker_threads: 2
    base_environment: prd
    environment:
      prd:
        formula:
          service01:
            source: git
            address: 'git@git.domain.com:service01-formula.git'
            revision: master
          service02:
            source: pkg
            name: salt-formula-service02
    pillar:
      engine: reclass
      reclass:
        storage_type: yaml_fs
        inventory_base_uri: /srv/salt/reclass
        propagate_pillar_data_to_reclass: False
        reclass_source_path: /tmp/reclass

Salt master with Architect ENC metadata backend

salt:
  master:
    enabled: true
    pillar:
      engine: architect
      project: project-name
      host: architect-api
      port: 8181
      username: salt
      password: password

Salt master with multiple ext_pillars

git:
  client:
    enabled: true
linux:
  system:
    enabled: true
reclass:
  storage:
    enabled: true
    data_source:
      engine: git
      branch: master
      address: 'https://github.com/salt-formulas/openstack-salt.git'
salt:
  master:
    enabled: true
    command_timeout: 5
    worker_threads: 2
    base_environment: prd
    pillar_safe_render_error: False
    #environment:
    # prd:
    #   formula:
    #     python:
    #       source: git
    #       address: 'https://github.com/salt-formulas/salt-formula-python.git'
    #       revision: master
    pillar:
      engine: composite
      reclass:
        # index: 1 is default value
        index: 1
        storage_type: yaml_fs
        inventory_base_uri: /srv/salt/reclass_encrypted
        class_mappings:
          - target: '/^cfg\d+/'
            class:  system.non-existing.class
        ignore_class_notfound: True
        ignore_class_regexp:
          - 'service.*'
          - '*.fluentd'
        propagate_pillar_data_to_reclass: False
      stack: # not yet implemented
        # https://docs.saltstack.com/en/latest/ref/pillar/all/salt.pillar.stack.html
        #option 1
        #path:
        #  - /path/to/stack.cfg
        #option 2
        pillar:environment:
          dev: path/to/dev/stack.cfg
          prod: path/to/prod/stack.cfg
        grains:custom:grain:
          value:
            - /path/to/stack1.cfg
            - /path/to/stack2.cfg
      saltclass:
        path: /srv/salt/saltclass
      nacl:
        # the index (99 here) composes the "99-nacl" key name, which is later used to order the entries
        index: 99
      gpg: {}
      vault-1: # not yet implemented
        name: vault
        path: secret/salt
      vault-2: # not yet implemented
        name: vault
        path: secret/root
    vault: # not yet implemented
      # https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.vault.html
      name: myvault
      url: https://vault.service.domain:8200
      auth:
          method: token
          token: 11111111-2222-3333-4444-555555555555
      policies:
          - saltstack/minions
          - saltstack/minion/{minion}
    nacl:
      # https://docs.saltstack.com/en/develop/ref/modules/all/salt.modules.nacl.html
      box_type: sealedbox
      sk_file: /etc/salt/pki/master/nacl
      pk_file: /etc/salt/pki/master/nacl.pub
      #sk: None
      #pk: None

Salt master with API

git:
  client:
    enabled: true
linux:
  system:
    enabled: true
salt:
  master:
    command_timeout: 5
    worker_threads: 2
    enabled: true
    source:
      engine: pkg
    pillar:
      engine: salt
      source:
        engine: local
    environment:
      prd:
        formula: {}
  api:
    enabled: true
    ssl:
      engine: salt
    bind:
      address: 0.0.0.0
      port: 8000

Salt master with defined user ACLs

git:
  client:
    enabled: true
linux:
  system:
    enabled: true
salt:
  master:
    command_timeout: 5
    worker_threads: 3
    enabled: true
    source:
      engine: pkg
    pillar:
      engine: salt
      source:
        engine: local
    environment:
      prd:
        formula: {}
    user:
      peter:
        enabled: true
        permissions:
        - 'fs.fs'
        - 'fs.\*'

Salt master with preset minions

salt:
  master:
    enabled: true
    minions:
    - name: 'node1.system.location.domain.com'

Salt master with pip-based installation (optional)

salt:
  master:
    enabled: true
    ...
    source:
      engine: pip
      version: 2016.3.0rc2

Install formula through system package management

salt:
  master:
    enabled: true
    ...
    environment:
      prd:
        keystone:
          source: pkg
          name: salt-formula-keystone
        nova:
          source: pkg
          name: salt-formula-keystone
          version: 0.1+0~20160818133412.24~1.gbp6e1ebb
        postgresql:
          source: pkg
          name: salt-formula-postgresql
          version: purged

The keystone formula is installed at its latest version, and the formulas without a version attribute are installed in a single call to the aptpkg module. If the version attribute is present, the sls iterates over the formulas and installs the specific version or removes the package. The version attribute may take one of these values: [latest|purged|removed|<VERSION>].

Clone master branch of keystone formula as local feature branch

salt:
  master:
    enabled: true
    ...
    environment:
      dev:
        formula:
          keystone:
            source: git
            address: git@github.com:openstack/salt-formula-keystone.git
            revision: master
            branch: feature

Salt master with specified formula refs (for example for Gerrit review)

salt:
  master:
    enabled: true
    ...
    environment:
      dev:
        formula:
          keystone:
            source: git
            address: https://git.openstack.org/openstack/salt-formula-keystone
            revision: refs/changes/56/123456/1

Salt master with logging handlers

salt:
  master:
    enabled: true
    handler:
      handler01:
        engine: udp
        bind:
          host: 127.0.0.1
          port: 9999
  minion:
    handler:
      handler01:
        engine: udp
        bind:
          host: 127.0.0.1
          port: 9999
      handler02:
        engine: zmq
        bind:
          host: 127.0.0.1
          port: 9999

Salt engine definition for saltgraph metadata collector

salt:
  master:
    engine:
      graph_metadata:
        engine: saltgraph
        host: 127.0.0.1
        port: 5432
        user: salt
        password: salt
        database: salt

Salt engine definition for Architect service

salt:
  master:
    engine:
      architect:
        engine: architect
        project: project-name
        host: architect-api
        port: 8181
        username: salt
        password: password

Salt engine definition for forwarding Docker events

salt:
  master:
    engine:
      docker_events:
        docker_url: unix://var/run/docker.sock

Salt master peer setup for remote certificate signing

salt:
  master:
    peer:
      ".*":
      - x509.sign_remote_certificate

Salt master backup configuration

salt:
  master:
    backup: true
    initial_data:
      engine: backupninja
      source: backup-node-host
      host: original-salt-master-id

Configure verbosity of state output (used for salt command)

salt:
  master:
    state_output: changes

Pass pillar render error to minion log

Note

When set to False this option is useful for debugging. However, it is not recommended for any production environment, as the rendered error may contain templating data, such as passwords, that the minion should not expose.

salt:
  master:
    pillar_safe_render_error: False

Event/Reactor Systems

Salt synchronise node pillar and modules after start

salt:
  master:
    reactor:
      salt/minion/*/start:
      - salt://salt/reactor/node_start.sls

Trigger basic node install

salt:
  master:
    reactor:
      salt/minion/install:
      - salt://salt/reactor/node_install.sls

Sample event to trigger the node installation

salt-call event.send 'salt/minion/install'

Run any defined orchestration pipeline

salt:
  master:
    reactor:
      salt/orchestrate/start:
      - salt://salt/reactor/orchestrate_start.sls

Event to trigger the orchestration pipeline

salt-call event.send 'salt/orchestrate/start' "{'orchestrate': 'salt/orchestrate/infra_install.sls'}"

Synchronise modules and pillars on minion start.

salt:
  master:
    reactor:
      'salt/minion/*/start':
      - salt://salt/reactor/minion_start.sls

Add and/or remove the minion key

salt:
  master:
    reactor:
      salt/key/create:
      - salt://salt/reactor/key_create.sls
      salt/key/remove:
      - salt://salt/reactor/key_remove.sls

Event to trigger the key creation

salt-call event.send 'salt/key/create' \
> "{'node_id': 'id-of-minion', 'node_host': '172.16.10.100', 'orch_post_create': 'kubernetes.orchestrate.compute_install', 'post_create_pillar': {'node_name': 'id-of-minion'}}"

Note

You can pass additional orch_pre_create, orch_post_create, orch_pre_remove or orch_post_remove parameters to the event to call extra orchestrate files. This can be useful, for example, for registering or unregistering nodes in monitoring alarms or dashboards.

The key creation event needs to be run from a machine other than the one being registered.
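As a sketch, a key removal event can carry such a parameter; the orchestrate file name below is a hypothetical example:

```shell
# 'monitoring.orchestrate.node_unregister' is a hypothetical orchestrate file
salt-call event.send 'salt/key/remove' \
  "{'node_id': 'id-of-minion', 'orch_post_remove': 'monitoring.orchestrate.node_unregister'}"
```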

Event to trigger the key removal

salt-call event.send 'salt/key/remove'

Encrypted Pillars

Note: NACL and the configuration below will be available in Salt > 2017.7.

Configure salt NACL module:

pip install --upgrade libnacl===1.5.2
salt-call --local nacl.keygen /etc/salt/pki/master/nacl

  local:
      saved sk_file:/etc/salt/pki/master/nacl  pk_file: /etc/salt/pki/master/nacl.pub
salt:
  master:
    pillar:
      reclass: *reclass
      nacl:
        index: 99
    nacl:
      box_type: sealedbox
      sk_file: /etc/salt/pki/master/nacl
      pk_file: /etc/salt/pki/master/nacl.pub
      #sk: None
      #pk: None

NACL encrypt secrets:

salt-call --local nacl.enc 'my_secret_value' pk_file=/etc/salt/pki/master/nacl.pub
hXTkJpC1hcKMS7yZVGESutWrkvzusXfETXkacSklIxYjfWDlMJmR37MlmthdIgjXpg4f2AlBKb8tc9Woma7q

# or salt-run nacl.enc 'myotherpass'

ADDFD0Rav6p6+63sojl7Htfrncp5rrDVyeE4BSPO7ipq8fZuLDIVAzQLf4PCbDqi+Fau5KD3/J/E+Pw=

NACL encrypted values on pillar:

Use the boxed syntax NACL[CryptedValue=] to embed encrypted values in a pillar:

my_pillar:
  my_nacl:
      key0: unencrypted_value
      key1: NACL[hXTkJpC1hcKMS7yZVGESutWrkvzusXfETXkacSklIxYjfWDlMJmR37MlmthdIgjXpg4f2AlBKb8tc9Woma7q]

NACL large files:

NACL within template/native pillars:

pillarexample:
  user: root
  password1: {{ salt.nacl.dec('DRB7Q6/X5gGSRCTpZyxS6hlbWj0llUA+uaVyvou3vJ4=') | json }}
  cert_key: {{ salt.nacl.dec_file('/srv/salt/env/dev/certs/example.com/cert.nacl') | json }}
  cert_key2: {{ salt.nacl.dec_file('salt:///certs/example.com/cert2.nacl') | json }}

Salt Syndic

The master of masters

salt:
  master:
    enabled: true
    order_masters: True

Lower syndicated master

salt:
  syndic:
    enabled: true
    master:
      host: master-of-master-host
    timeout: 5

Syndicated master with multiple master of masters

salt:
  syndic:
    enabled: true
    masters:
    - host: master-of-master-host1
    - host: master-of-master-host2
    timeout: 5

Salt Minion

Simplest Salt minion setup with central configuration node


salt:
  minion:
    enabled: true
    master:
      host: config01.dc01.domain.com

Multi-master Salt minion setup

salt:
  minion:
    enabled: true
    masters:
    - host: config01.dc01.domain.com
    - host: config02.dc01.domain.com

Salt minion with salt mine options

salt:
  minion:
    enabled: true
    mine:
      interval: 60
      module:
        grains.items: []
        network.interfaces: []
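Data published through the mine can then be queried with the standard mine.get function, for example:

```shell
# Fetch the cached network.interfaces data of all minions
salt '*' mine.get '*' network.interfaces
```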

Salt minion with graphing dependencies

salt:
  minion:
    enabled: true
    graph_states: true

Salt minion behind HTTP proxy

salt:
  minion:
    proxy:
      host: 127.0.0.1
      port: 3128

Salt minion with a non-default HTTP backend. The default tornado backend does not respect HTTP proxy settings set as environment variables, so this is useful for cases where you need to set no_proxy lists.

salt:
  minion:
    backend: urllib2

Salt minion with PKI certificate authority (CA)

salt:
  minion:
    enabled: true
    ca:
      salt-ca-default:
        common_name: Test CA Default
        country: Czech
        state: Prague
        locality: Zizkov
        days_valid:
          authority: 3650
          certificate: 90
        signing_policy:
          cert_server:
            type: v3_edge_cert_server
            minions: '*'
          cert_client:
            type: v3_edge_cert_client
            minions: '*'
          ca_edge:
            type: v3_edge_ca
            minions: '*'
          ca_intermediate:
            type: v3_intermediate_ca
            minions: '*'
      salt-ca-test:
        common_name: Test CA Testing
        country: Czech
        state: Prague
        locality: Karlin
        days_valid:
          authority: 3650
          certificate: 90
        signing_policy:
          cert_server:
            type: v3_edge_cert_server
            minions: '*'
          cert_client:
            type: v3_edge_cert_client
            minions: '*'
          ca_edge:
            type: v3_edge_ca
            minions: '*'
          ca_intermediate:
            type: v3_intermediate_ca
            minions: '*'
      salt-ca-alt:
        common_name: Alt CA Testing
        country: Czech
        state: Prague
        locality: Cesky Krumlov
        days_valid:
          authority: 3650
          certificate: 90
        signing_policy:
          cert_server:
            type: v3_edge_cert_server
            minions: '*'
          cert_client:
            type: v3_edge_cert_client
            minions: '*'
          ca_edge:
            type: v3_edge_ca
            minions: '*'
          ca_intermediate:
            type: v3_intermediate_ca
            minions: '*'
        ca_file: '/etc/test/ca.crt'
        ca_key_file: '/etc/test/ca.key'
        user: test
        group: test

Salt minion using PKI certificate

salt:
  #master:
  # enabled: true
  # accept_policy:
  #   open_mode
  # peer:
  #   '.*':
  #     - x509.sign_remote_certificate
  minion:
    enabled: true
    trusted_ca_minions:
     - cfg01
    cert:
      ceph_cert:
          alternative_names:
              IP:127.0.0.1,DNS:salt.ci.local,DNS:ceph.ci.local,DNS:radosgw.ci.local,DNS:swift.ci.local
          cert_file:
              /srv/salt/pki/ci/ceph.ci.local.crt
          common_name:
              ceph_mon.ci.local
          key_file:
              /srv/salt/pki/ci/ceph.ci.local.key
          country: CZ
          state: Prague
          locality: Karlin
          signing_cert:
              /etc/pki/ca/salt-ca-test/ca.crt
          signing_private_key:
              /etc/pki/ca/salt-ca-test/ca.key
          # Kitchen-Salt CI trigger `salt-call --local`, below attributes
          # can't be used as there is no required SaltMaster connectivity
          authority:
              salt-ca-test
          #host:
          #    salt.ci.local
          #signing_policy:
          #    cert_server
      proxy_cert:
          alternative_names:
              IP:127.0.0.1,DNS:salt.ci.local,DNS:proxy.ci.local
          cert_file:
              /srv/salt/pki/ci/prx.ci.local.crt
          common_name:
              prx.ci.local
          key_file:
              /srv/salt/pki/ci/prx.ci.local.key
          country: CZ
          state: Prague
          locality: Zizkov
          signing_cert:
              /etc/pki/ca/salt-ca-default/ca.crt
          signing_private_key:
              /etc/pki/ca/salt-ca-default/ca.key
          # Kitchen-Salt CI trigger `salt-call --local`, below attributes
          # can't be used as there is no required SaltMaster connectivity
          authority:
             salt-ca-default
          #host:
          #   salt.ci.local
          #signing_policy:
          #   cert_server
      test_cert:
          alternative_names:
              IP:127.0.0.1,DNS:salt.ci.local,DNS:test.ci.local
          cert_file:
              /srv/salt/pki/ci/test.ci.local.crt
          common_name:
              test.ci.local
          key_file:
              /srv/salt/pki/ci/test.ci.local.key
          country: CZ
          state: Prague
          locality: Cesky Krumlov
          signing_cert:
              /etc/test/ca.crt
          signing_private_key:
              /etc/test/ca.key
          # Kitchen-Salt CI trigger `salt-call --local`, below attributes
          # can't be used as there is no required SaltMaster connectivity
          authority:
             salt-ca-alt

Salt minion trusting CA certificates issued by the salt CA on a specific host (i.e. the salt-master node)

salt:
  minion:
    trusted_ca_minions:
      - cfg01

Salt Minion Proxy

Salt proxy pillar

salt:
  minion:
    proxy_minion:
      master: localhost
      device:
        vsrx01.mydomain.local:
          enabled: true
          engine: napalm
        csr1000v.mydomain.local:
          enabled: true
          engine: napalm

Note

This is the pillar of the real salt-minion

Proxy pillar for IOS device

proxy:
  proxytype: napalm
  driver: ios
  host: csr1000v.mydomain.local
  username: root
  passwd: r00tme

Note

This is the pillar of a node that is not able to run salt-minion itself

Proxy pillar for JunOS device

proxy:
  proxytype: napalm
  driver: junos
  host: vsrx01.mydomain.local
  username: root
  passwd: r00tme
  optional_args:
    config_format: set

Note

This is the pillar of a node that is not able to run salt-minion itself

Salt SSH

Salt SSH with sudoer using key

git:
  client:
    enabled: true
linux:
  system:
    enabled: true
salt:
  master:
    command_timeout: 5
    worker_threads: 2
    enabled: true
    source:
      engine: pkg
    pillar:
      engine: salt
      source:
        engine: local
    environment:
      prd:
        formula: {}
    ssh:
      minion:
        node01:
          host: 10.0.0.1
          user: saltssh
          sudo: true
          key_file: /path/to/the/key
          port: 22
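Once the roster entry above is rendered, the node can be managed over SSH with the standard salt-ssh client, for example:

```shell
# Test connectivity and apply the highstate over SSH
salt-ssh 'node01' test.ping
salt-ssh 'node01' state.apply
```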

Salt SSH with sudoer using password

git:
  client:
    enabled: true
linux:
  system:
    enabled: true
salt:
  master:
    command_timeout: 5
    worker_threads: 2
    enabled: true
    source:
      engine: pkg
    pillar:
      engine: salt
      source:
        engine: local
    environment:
      prd:
        formula: {}
    ssh:
      minion:
        node01:
          host: 10.0.0.1
          user: saltssh
          sudo: true
          password: password
          port: 22

Salt SSH with root using password

git:
  client:
    enabled: true
linux:
  system:
    enabled: true
salt:
  master:
    command_timeout: 5
    worker_threads: 2
    enabled: true
    source:
      engine: pkg
    pillar:
      engine: salt
      source:
        engine: local
    environment:
      prd:
        formula: {}
    ssh:
      minion:
        node01:
          host: 10.0.0.1
          user: root
          password: password
          port: 22

Salt control (cloud/kvm/docker)

Salt cloud with local OpenStack provider

salt:
  control:
    enabled: true
    cloud_enabled: true
    provider:
      openstack_account:
        engine: openstack
        insecure: true
        region: RegionOne
        identity_url: 'https://10.0.0.2:35357'
        tenant: project 
        user: user
        password: 'password'
        fixed_networks:
        - 123d3332-18be-4d1d-8d4d-5f5a54456554e
        floating_networks:
        - public
        ignore_cidr: 192.168.0.0/16
    cluster:
      dc01_prd:
        domain: dc01.prd.domain.com
        engine: cloud
        config:
          engine: salt
          host: master.dc01.domain.com
        node:
          ubuntu1:
            provider: openstack_account
            image: Ubuntu14.04 x86_64
            size: m1.medium
          ubuntu2:
            provider: openstack_account
            image: Ubuntu14.04 x86_64
            size: m1.medium

Salt cloud with Digital Ocean provider

salt:
  control:
    enabled: true
    cloud_enabled: true
    provider:
      digitalocean_account:
        engine: digital_ocean
        region: New York 1
        client_key: xxxxxxx
        api_key: xxxxxxx
    cluster:
      dc01_prd:
        domain: dc01.prd.domain.com
        engine: cloud
        config:
          engine: salt
          host: master.dc01.domain.com
        node:
          ubuntu1:
            provider: digitalocean_account
            image: Ubuntu14.04 x86_64
            size: m1.medium
          ubuntu2:
            provider: digitalocean_account
            image: Ubuntu14.04 x86_64
            size: m1.medium

Salt virt with KVM cluster

virt:
  disk:
    three_disks:
      - system:
          size: 4096
          image: ubuntu.qcow
      - repository_snapshot:
          size: 8192
          image: snapshot.qcow
      - cinder-volume:
          size: 2048
salt:
  minion:
    enabled: true
    master:
      host: config01.dc01.domain.com
  control:
    enabled: true
    virt_enabled: true
    size:
      small:
        cpu: 1
        ram: 1
      medium:
        cpu: 2
        ram: 4
      large:
        cpu: 4
        ram: 8
      medium_three_disks:
        cpu: 2
        ram: 4
        disk_profile: three_disks
    cluster:
      vpc20_infra:
        domain: neco.virt.domain.com
        engine: virt
        config:
          engine: salt
          host: master.domain.com
        node:
          ubuntu1:
            provider: node01.domain.com
            image: ubuntu.qcow
            size: medium
          ubuntu2:
            provider: node02.domain.com
            image: bubuntu.qcomw
            size: small
          ubuntu3:
            provider: node03.domain.com
            image: meowbuntu.qcom2
            size: medium_three_disks

Salt virt with custom destination for image files

virt:
  disk:
    three_disks:
      - system:
          size: 4096
          image: ubuntu.qcow
      - repository_snapshot:
          size: 8192
          image: snapshot.qcow
      - cinder-volume:
          size: 2048
salt:
  minion:
    enabled: true
    master:
      host: config01.dc01.domain.com
  control:
    enabled: true
    virt_enabled: true
    size:
      small:
        cpu: 1
        ram: 1
      medium:
        cpu: 2
        ram: 4
      large:
        cpu: 4
        ram: 8
      medium_three_disks:
        cpu: 2
        ram: 4
        disk_profile: three_disks
    cluster:
      vpc20_infra:
        domain: neco.virt.domain.com
        engine: virt
        config:
          engine: salt
          host: master.domain.com
        node:
          ubuntu1:
            provider: node01.domain.com
            image: ubuntu.qcow
            size: medium
            img_dest: /var/lib/libvirt/ssdimages
          ubuntu2:
            provider: node02.domain.com
            image: bubuntu.qcomw
            size: small
            img_dest: /var/lib/libvirt/hddimages
          ubuntu3:
            provider: node03.domain.com
            image: meowbuntu.qcom2
            size: medium_three_disks

Usage

Working with salt-cloud

salt-cloud -m /path/to/map --assume-yes
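A map file pairs cloud profiles with the VM names to create; a minimal sketch (the profile and node names here are assumptions):

```yaml
# openstack_medium is an assumed cloud profile name
openstack_medium:
  - web01
  - web02
```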

Debug LIBCLOUD for salt-cloud connection

export LIBCLOUD_DEBUG=/dev/stderr; salt-cloud --list-sizes provider_name --log-level all

Documentation and Bugs

To learn how to install and update salt-formulas, consult the documentation available online.

In the unfortunate event that bugs are discovered, they should be reported to the appropriate issue tracker. Use the GitHub issue tracker of the specific salt formula.

For feature requests, bug reports, or blueprints affecting the entire ecosystem, use the Launchpad salt-formulas project.

You can also join the salt-formulas-users team and subscribe to its mailing list.

Developers wishing to work on the salt-formulas projects should always base their work on the master branch and submit pull requests against the specific formula.

Any questions or feedback are always welcome, so feel free to join our IRC channel:

#salt-formulas @ irc.freenode.net