On the master:

yum install salt-master -y

Ensure that your internal DNS server resolves salt.example.com (or the equivalent in your domain) to your salt-master - by default, minions look for a master named 'salt'.

service salt-master start
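
The master serves state files from its file_roots, which defaults to /srv/salt - the sls files in this post are assumed to live there. The relevant setting in /etc/salt/master (shown with the default) is:

file_roots:
  base:
    - /srv/salt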

 

On your minion:

yum install salt-minion -y

service salt-minion start

Add an /etc/hosts entry for the salt-minion's hostname pointing to its IP address - the minion derives its ID from its hostname, so this should be in place before it registers with the master.
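
For example (the hostname and IP address here are placeholders):

192.0.2.20    test.example.com test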

 

 

Back on the master:

salt-key -L

This lists all minion keys - a new minion will appear under Unaccepted Keys. You need to accept its key before you can push updates to it, which can be done with:

salt-key -A
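
Once accepted, you can confirm that the master can reach the minion with:

salt '*' test.ping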

 

top.sls

top.sls is the top-level configuration file; it specifies which sls files are pushed out, and to which clients.

NB> The format of these files is critical - they are YAML, and if the indentation is not exact they will fail: the top line starts at the beginning of the line, the second line is indented two spaces to the right of that, the third line two spaces further, and so on.

[root@openvas salt]# cat top.sls

base:

  '*':

    - mypkgs

    - repo

    - limits

    - selinux

    - email

    - firewalld

    - jdk

    - iptables

    - sudoers

 

 

This applies each of the listed state files to every minion matched by '*'.
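
Targets other than '*' can also be used in top.sls to apply states only to matching minions - a sketch using a hypothetical 'db*' hostname glob:

base:
  '*':
    - mypkgs
  'db*':
    - postgresql94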

[root@openvas salt]# cat jdk.sls

jdk:

  pkg.installed:

    - sources:

      - jdk: salt://rpms/jdk-8u66-linux-x64.rpm

 

[root@openvas salt]# ll rpms

total 153M

drwxr-xr-x 2 root root 4.0K Nov 10 11:12 .

drwxr-xr-x 9 root root 4.0K Nov 10 11:53 ..

-rw-r----- 1 root root 153M Nov 10 11:13 jdk-8u66-linux-x64.rpm
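
A single state file can also be applied on its own, which is useful for testing before it goes into top.sls:

salt 'test.example.com' state.sls jdk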

 

[root@openvas salt]# cat email.sls

email:

  file.managed:

    - name: /etc/aliases

    - source: salt://email/aliases

    - user: root

    - group: root

    - mode: 644
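
The aliases database needs rebuilding after /etc/aliases changes. A sketch of an extra state that runs newaliases whenever Salt updates the file (cmd.wait only fires when a watched state reports a change):

newaliases:
  cmd.wait:
    - watch:
      - file: email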

 

[root@openvas salt]# cat iptables.sls

iptables:

  pkg.installed:

    - pkgs:

      - iptables-services

  service.running:

    - require:

      - file: /etc/sysconfig/iptables

  file.managed:

    - name: /etc/sysconfig/iptables

    - source: salt://iptables/iptables

    - user: root

    - group: root

    - mode: 644
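
NB> The require above only enforces ordering - the rules file is written before the service starts. If you also want iptables restarted whenever the rules file changes, watch can be used instead - a sketch:

iptables:
  service.running:
    - watch:
      - file: /etc/sysconfig/iptables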

 

Install packages, then run a shell script once its prerequisite files are in place:

cat centrify.sls

centrify:

  pkg.installed:

    - sources:

      - centrifydc: salt://centrify/centrifydc-5.2.3-rhel4-x86_64.rpm

      - centrifydc-ldapproxy: salt://centrify/centrifydc-ldapproxy-5.2.3-rhel4-x86_64.rpm

      - centrifydc-nis: salt://centrify/centrifydc-nis-5.2.3-rhel4-x86_64.rpm

      - centrifydc-openssh: salt://centrify/centrifydc-openssh-6.7p1-5.2.3-rhel4-x86_64.rpm

 

cat centrify_join.sls

/tmp/centrifydc-install.cfg:

  file.managed:

    - source: salt://centrify/centrifydc-install.cfg

    - user: root

    - group: root

    - mode: 644

 

/tmp/centrify-suite.cfg:

  file.managed:

    - source: salt://centrify/centrify-suite.cfg

    - user: root

    - group: root

    - mode: 644

 

/tmp/adcheck-rhel4-x86_64:

  file.managed:

    - source: salt://centrify/adcheck-rhel4-x86_64

    - user: root

    - group: root

    - mode: 755

 

centrify_join:

  cmd.script:

    - require:

      - file: /tmp/centrifydc-install.cfg

      - file: /tmp/centrify-suite.cfg

      - file: /tmp/adcheck-rhel4-x86_64

    - source: salt://centrify/install.sh

    - user: root

    - group: root

    - shell: /bin/bash
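
As written, the join script runs on every highstate. If install.sh is not idempotent it can be guarded with unless - a sketch, assuming Centrify's adinfo exits non-zero when the host is not joined (check the exact behaviour in your environment):

centrify_join:
  cmd.script:
    - source: salt://centrify/install.sh
    - unless: adinfo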

 

Add a bunch of repos:

NB> I set gpgcheck=0 in these repo files so that yum does not check package signatures.
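
Each of these is a standard yum repo definition - a minimal sketch of what atomic.repo might contain (the name and baseurl are placeholders):

[atomic]
name = Atomic repository
baseurl = http://mirror.example.com/atomic/el7/
enabled = 1
gpgcheck = 0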

repo:

  file.managed:

    - name: /etc/yum.repos.d/atomic.repo

    - source: salt://repo/atomic.repo

    - user: root

    - group: root

    - mode: 644

webmin_repo:

  file.managed:

    - name: /etc/yum.repos.d/webmin.repo

    - source: salt://repo/webmin.repo

    - user: root

    - group: root

    - mode: 644

rpmforge:

  file.managed:

    - name: /etc/yum.repos.d/rpmforge.repo

    - source: salt://repo/rpmforge.repo

    - user: root

    - group: root

    - mode: 644

 

pgdg:

  file.managed:

    - name: /etc/yum.repos.d/pgdg-94-centos.repo

    - source: salt://repo/pgdg-94-centos.repo

    - user: root

    - group: root

    - mode: 644

 

Push out to all clients with everything in top.sls:

salt '*' state.highstate

Push out to specific client with everything in top.sls:

salt 'test.example.com' state.highstate
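
Appending test=True to either command shows what would change without applying anything:

salt 'test.example.com' state.highstate test=True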

You can also run any of the commands used in the sls files directly.

Push out to all clients with specific commands:

salt '*' cmd.run "yum upgrade -y"

Push out to specific client with specific commands:

salt 'test.example.com' cmd.run "yum upgrade -y"

 

Install the PostgreSQL 9.4 server packages, initialise the database, copy your updated pg_hba.conf and postgresql.conf into the /var/lib/pgsql/9.4/data directory, then start the service and ensure it is running:

postgresql94:

  pkg.installed:

    - pkgs:

      - postgresql94-server

      - postgresql94-devel

      - postgresql94-contrib

  cmd.run:

    - name: '/usr/pgsql-9.4/bin/postgresql94-setup initdb'
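
NB> As written, the initdb command runs on every highstate and will fail once the data directory has been initialised. It can be guarded with unless - a sketch using the PG_VERSION file that initdb creates:

  cmd.run:
    - name: '/usr/pgsql-9.4/bin/postgresql94-setup initdb'
    - unless: test -f /var/lib/pgsql/9.4/data/PG_VERSION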

 

/var/lib/pgsql/9.4/data/pg_hba.conf:

  file.managed:

    - name: /var/lib/pgsql/9.4/data/pg_hba.conf

    - source: salt://postgresql94/pg_hba.conf

    - user: postgres

    - group: postgres

    - mode: 600

 

/var/lib/pgsql/9.4/data/postgresql.conf:

  file.managed:

    - name: /var/lib/pgsql/9.4/data/postgresql.conf

    - source: salt://postgresql94/postgresql.conf

    - user: postgres

    - group: postgres

    - mode: 600

 

postgresql-9.4:

  service.running:

    - require:

      - file: /var/lib/pgsql/9.4/data/pg_hba.conf

      - file: /var/lib/pgsql/9.4/data/postgresql.conf
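
If you also want PostgreSQL restarted when either config file changes, and enabled at boot, watch and enable can be added - a sketch:

postgresql-9.4:
  service.running:
    - enable: True
    - watch:
      - file: /var/lib/pgsql/9.4/data/pg_hba.conf
      - file: /var/lib/pgsql/9.4/data/postgresql.conf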

 

Troubleshooting:

If a minion gives a "Not Connected" error when you run a salt command from the master, the minion's file cache may be stale. This can happen if you installed Salt before adding the hostname to /etc/hosts. You can clear the cache and resync by running:

 

salt '*' cmd.run 'rm -rf /var/cache/salt/minion/files/base/*'

salt '*' saltutil.sync_all
 
If that does not fix it: remove the minion's key on the master, add the hostname to /etc/hosts, remove /etc/salt/minion_id on the minion, and then restart salt on both the master and the minion.
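
The key can be removed on the master with (substituting your minion's ID):

salt-key -d test.example.com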

 
