Saltstack uses a server-agent setup (although working agentless is possible as well) and is Python-based. Agents are called minions (not to be confused with the characters from Despicable Me), facts about minions are called grains, variables are kept in pillars and states are what gets executed on the minions. As you can see, somebody had a field day thinking all that stuff up ...
(Image: http://www.deviantart.com/art/The-Art-of-Time-121274132)
The setup for this post consists of two machines :
saltmaster - Ubuntu 14.04 LTS
saltminion01 - CentOS 6.8
Contrary to Ansible, Saltstack runs practically everything as root, so let's check that we have the permissions for that. First on saltmaster :
saltmaster@saltmaster:~$ sudo apt-get install openssh-server
saltmaster@saltmaster:~$ sudo vi /etc/hosts
127.0.0.1 localhost
127.0.1.1 saltmaster
192.168.42.131 salt
Just to be clear, Saltstack (at least in the server-agent setup) does not require ssh, I'm just installing it to check my sudo permissions and because I do like to be able to access my hosts over ssh.
The new salt entry in the /etc/hosts file will allow us to use the saltmaster server as a minion too. Note that the IP is the IP of the saltmaster server itself.
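A quick way to verify that the name resolution works (output shown here is just what it looks like in this setup) :
saltmaster@saltmaster:~$ getent hosts salt
192.168.42.131  salt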
Next on saltminion01 :
By default, no user is given sudo rights on CentOS 6.8, so add the user (saltminion) you will be doing salt-stuff with to the /etc/sudoers file. Note that you will require access to root to do that (a chicken-and-egg problem there; on CentOS 7.x installations you can have a user with sudo rights by default ... which makes life somewhat easier).
[root@saltminion01 ~]# vi /etc/sudoers
...
saltminion ALL=(ALL) ALL
...
Also by default, openssh-server is already installed on CentOS, but let's check that the sudo permissions are working :
[saltminion@saltminion01 ~]$ sudo yum install openssh-server
Nothing to do
[saltminion@saltminion01 ~]$ sudo vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1 saltminion01
192.168.42.131 salt saltmaster
Same thing: the salt entry will allow the minion to find the master. So let's install Saltstack now. First on saltmaster :
saltmaster@saltmaster:~$ curl -L https://bootstrap.saltstack.com -o install_salt.sh
saltmaster@saltmaster:~$ sudo sh install_salt.sh -P -M
...
salt-master stop/waiting
salt-master start/running, process 7582
salt-minion stop/waiting
salt-minion start/running, process 7589
* INFO: Running daemons_running()
* INFO: Salt installed!
And next on saltminion01 :
[saltminion@saltminion01 ~]$ curl -L https://bootstrap.saltstack.com -o install_salt.sh
[saltminion@saltminion01 ~]$ sudo sh install_salt.sh -P
...
Complete!
* INFO: Running install_centos_stable_post()
* INFO: Running install_centos_check_services()
* INFO: Running install_centos_restart_daemons()
Starting salt-minion daemon: [ OK ]
* INFO: Running daemons_running()
* INFO: Salt installed!
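The reason a plain salt hosts entry is enough: out of the box the minion looks for a master named salt (the default /etc/salt/minion ships with that value commented out). A quick way to see it :
[saltminion@saltminion01 ~]$ sudo grep 'master: salt' /etc/salt/minion
#master: salt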
We now have two minions (saltmaster itself and saltminion01). Their keys need to be accepted on the master before it will talk to them :
saltmaster@saltmaster:~$ sudo salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
saltminion01
saltmaster
Proceed? [n/Y] Y
Key for minion saltminion01 accepted.
Key for minion saltmaster accepted.
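You can list the keys and their status at any time with salt-key -L (output illustrative) :
saltmaster@saltmaster:~$ sudo salt-key -L
Accepted Keys:
saltmaster
saltminion01
Denied Keys:
Unaccepted Keys:
Rejected Keys: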
To conclude the installation, we verify that the minions are accepting commands :
saltmaster@saltmaster:~$ sudo salt '*' test.ping
saltmaster:
True
saltminion01:
True
Nice, that works and wasn't all that complicated, was it ? Now, Saltstack keeps the configuration management definitions in simple YAML files. The next step is to tell the master where those files are :
saltmaster@saltmaster:~$ sudo vi /etc/salt/master
...
file_roots:
  base:
    - /home/saltmaster/salt/file/base
...
pillar_roots:
  base:
    - /home/saltmaster/salt/pillar/base
...
saltmaster@saltmaster:~$ sudo service salt-master restart
salt-master stop/waiting
salt-master start/running, process 15507
The files themselves need not be created with root, so they can live in the home directory of the saltmaster user and be managed by him. The actual definition (or state, as they are called in Saltstack) files go in the ... salt/file/base directory, the pillar files (containing variables) in the ... salt/pillar/base directory.
In this post I'll not use multiple environments, so only the default one, base, will be used.
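Should you ever want an additional environment (say dev; the dev path below is purely hypothetical), the master config would simply grow another entry per root :
file_roots:
  base:
    - /home/saltmaster/salt/file/base
  dev:
    - /home/saltmaster/salt/file/dev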
Ansible has vaults to keep sensitive data hidden. Saltstack can work with gpg-encrypted data instead. For that we need to generate a key first :
saltmaster@saltmaster:~$ sudo mkdir -p /etc/salt/gpgkeys
saltmaster@saltmaster:~$ sudo chmod 0700 /etc/salt/gpgkeys
saltmaster@saltmaster:~$ sudo gpg --gen-key --homedir /etc/salt/gpgkeys
...
gpg: keyring `/etc/salt/gpgkeys/secring.gpg' created
gpg: keyring `/etc/salt/gpgkeys/pubring.gpg' created
Please select what kind of key you want:
(1) RSA and RSA (default)
(2) DSA and Elgamal
(3) DSA (sign only)
(4) RSA (sign only)
Your selection?
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)
Requested keysize is 2048 bits
Please specify how long the key should be valid.
0 = key does not expire
<n> = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
Key is valid for? (0)
Key does not expire at all
Is this correct? (y/N) y
...
Real name: Salt Master
Email address: saltmaster@saltmaster
Comment: Salt Master
You selected this USER-ID:
"Salt Master (Salt Master) <saltmaster@saltmaster>"
Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.
You don't want a passphrase - this is probably a *bad* idea!
I will do it anyway. You can change your passphrase at any time,
using this program with the option "--edit-key".
gpg: /etc/salt/gpgkeys/trustdb.gpg: trustdb created
gpg: key 66D7B662 marked as ultimately trusted
public and secret key created and signed.
gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u
pub 2048R/66D7B662 2016-12-29
Key fingerprint = 178B DF56 601E 2B34 B720 ADF1 D5EA 9462 66D7 B662
uid Salt Master (Salt Master) <saltmaster@saltmaster>
sub 2048R/CBFAD0F5 2016-12-29
And our saltmaster user needs to import the (public) key. Obviously you have to replace 66D7B662 in the commands below with the key id that was generated on your system :
saltmaster@saltmaster:~$ sudo gpg --homedir /etc/salt/gpgkeys --armor --export 66D7B662 > /var/tmp/exported_pubkey.gpg
saltmaster@saltmaster:~$ gpg --import /var/tmp/exported_pubkey.gpg
gpg: /home/saltmaster/.gnupg/trustdb.gpg: trustdb created
gpg: key 66D7B662: public key "Salt Master (Salt Master) <saltmaster@saltmaster>" imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
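A quick round trip shows that anything encrypted for this key can be decrypted with the master's keyring (the informational gpg lines are abbreviated here) :
saltmaster@saltmaster:~$ echo -n "test" | gpg --armor --batch --trust-model always --encrypt -r 66D7B662 | sudo gpg --homedir /etc/salt/gpgkeys --decrypt
...
test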
We are all set up and ready to go now. Let's start with something simple.
saltmaster@saltmaster:~$ mkdir -p salt/file/base
saltmaster@saltmaster:~$ mkdir -p salt/pillar/base
saltmaster@saltmaster:~$ vi salt/file/base/top.sls
base:
  'saltmaster':
    - master
  'saltminion*':
    - minion
saltmaster@saltmaster:~$ vi salt/file/base/master.sls
touched:
  cmd.run:
    - name: touch /var/tmp/saltmaster
    - creates:
      - /var/tmp/saltmaster
saltmaster@saltmaster:~$ vi salt/file/base/minion.sls
touched:
  cmd.run:
    - name: touch /var/tmp/saltminion
    - creates:
      - /var/tmp/saltminion
In the top.sls file we define which minions are going to get which states applied. Next we define those states (master.sls, minion.sls). The only thing we're actually going to do is touch a file, but the creates option makes the state idempotent: as soon as the listed file exists, the command will not be run again.
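If you want to see what would change without actually changing anything, you can do a dry run against a single minion first (a sketch; test=True only reports, it does not execute) :
saltmaster@saltmaster:~$ sudo salt 'saltminion01' state.apply minion test=True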
Applying the states is done as follows :
saltmaster@saltmaster:~$ sudo salt '*' state.highstate
saltmaster:
----------
          ID: touched
    Function: cmd.run
        Name: touch /var/tmp/saltmaster
      Result: True
     Comment: Command "touch /var/tmp/saltmaster" run
     Started: 14:47:44.531934
    Duration: 7.843 ms
     Changes:
              ----------
              pid:
                  16845
              retcode:
                  0
              stderr:
              stdout:

Summary for saltmaster
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time:   7.843 ms
saltminion01:
----------
          ID: touched
    Function: cmd.run
        Name: touch /var/tmp/saltminion
      Result: True
     Comment: Command "touch /var/tmp/saltminion" run
     Started: 14:47:46.566812
    Duration: 38.429 ms
     Changes:
              ----------
              pid:
                  28048
              retcode:
                  0
              stderr:
              stdout:

Summary for saltminion01
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time:  38.429 ms
And the result is :
saltmaster@saltmaster:~$ ll /var/tmp/saltmaster
-rw-r--r-- 1 root root 0 Dez 29 14:47 /var/tmp/saltmaster
[saltminion@saltminion01 ~]$ ll /var/tmp/saltminion
-rw-r--r--. 1 root root 0 Dec 29 14:47 /var/tmp/saltminion
Running a highstate again shows that Saltstack now knows about the files; the commands are not executed again :
saltmaster@saltmaster:~$ sudo salt '*' state.highstate
saltmaster:
----------
          ID: touched
    Function: cmd.run
        Name: touch /var/tmp/saltmaster
      Result: True
     Comment: All files in creates exist
     Started: 14:50:56.020363
    Duration: 0.35 ms
     Changes:

Summary for saltmaster
------------
Succeeded: 1
Failed:    0
------------
Total states run:     1
Total run time:   0.350 ms
saltminion01:
----------
          ID: touched
    Function: cmd.run
        Name: touch /var/tmp/saltminion
      Result: True
     Comment: All files in creates exist
     Started: 14:50:57.934432
    Duration: 4.667 ms
     Changes:

Summary for saltminion01
------------
Succeeded: 1
Failed:    0
------------
Total states run:     1
Total run time:   4.667 ms
Having states defined in single files is nice, but they can also be made up of multiple files. A bit of reorganization is in order :
saltmaster@saltmaster:~$ mkdir salt/file/base/master
saltmaster@saltmaster:~$ mkdir salt/file/base/minion
saltmaster@saltmaster:~$ mv salt/file/base/master.sls salt/file/base/master/init.sls
saltmaster@saltmaster:~$ mv salt/file/base/minion.sls salt/file/base/minion/init.sls
Run a highstate again and you'll see nothing has changed; we've just made things a bit more manageable.
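For reference, the tree under /home/saltmaster now looks like this (the pillar side is still empty, it gets filled in the next step; the ordering of the find output may differ on your system) :
saltmaster@saltmaster:~$ find salt
salt
salt/file
salt/file/base
salt/file/base/top.sls
salt/file/base/master
salt/file/base/master/init.sls
salt/file/base/minion
salt/file/base/minion/init.sls
salt/pillar
salt/pillar/base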
Time to get started on Tomcat. Variables first. As I said earlier, Saltstack keeps those in pillar files, and these work in exactly the same fashion as the state files :
saltmaster@saltmaster:~$ vi salt/pillar/base/top.sls
base:
  'saltminion*':
    - tomcat7
saltmaster@saltmaster:~$ mkdir salt/pillar/base/tomcat7
saltmaster@saltmaster:~$ echo -n "adminsecret" | gpg --armor --batch --trust-model always --encrypt -r 66D7B662
-----BEGIN PGP MESSAGE-----
<your PGP message will be here>
-----END PGP MESSAGE-----
saltmaster@saltmaster:~$ vi salt/pillar/base/tomcat7/init.sls
#!yaml|gpg
tomcat7_http_port: 8080
tomcat7_version: 7.0.73
tomcat7_admin_username: admin
tomcat7_admin_password: |
  -----BEGIN PGP MESSAGE-----
  <your PGP message here, note that the message is indented !!!>
  -----END PGP MESSAGE-----
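Before moving on it is worth checking that the minion actually receives these values and that the gpg renderer decrypts them. Refresh and list the pillar (output not repeated here, but tomcat7_admin_password should come back as adminsecret in clear text) :
saltmaster@saltmaster:~$ sudo salt 'saltminion*' saltutil.refresh_pillar
saltmaster@saltmaster:~$ sudo salt 'saltminion*' pillar.items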
Having the variables available we can move on to the actual state files :
saltmaster@saltmaster:~$ mkdir salt/file/base/tomcat7
saltmaster@saltmaster:~$ vi salt/file/base/tomcat7/init.sls
tomcat7_initial:
  # https://docs.saltstack.com/en/latest/ref/states/all/salt.states.group.html
  group.present:
    - name: tomcat
  # https://docs.saltstack.com/en/latest/ref/states/all/salt.states.user.html
  user.present:
    - name: tomcat
    - groups:
      - tomcat
    - require:
      - group: tomcat
  # https://docs.saltstack.com/en/latest/ref/states/all/salt.states.archive.html
  archive.extracted:
    - name: /opt
    - source: http://archive.apache.org/dist/tomcat/tomcat-7/v{{ pillar['tomcat7_version'] }}/bin/apache-tomcat-{{ pillar['tomcat7_version'] }}.tar.gz
    - source_hash: http://archive.apache.org/dist/tomcat/tomcat-7/v{{ pillar['tomcat7_version'] }}/bin/apache-tomcat-{{ pillar['tomcat7_version'] }}.tar.gz.md5
    - user: tomcat
    - group: tomcat
    - enforce_ownership_on: /opt/apache-tomcat-{{ pillar['tomcat7_version'] }}
    - require:
      - user: tomcat
      - group: tomcat
  # https://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html
  file.symlink:
    - name: /usr/share/tomcat
    - user: tomcat
    - group: tomcat
    - target: /opt/apache-tomcat-{{ pillar['tomcat7_version'] }}
    - require:
      - user: tomcat
      - group: tomcat

tomcat7_tomcat-users:
  # https://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html
  file.managed:
    - name: /opt/apache-tomcat-{{ pillar['tomcat7_version'] }}/conf/tomcat-users.xml
    - source:
      - salt://tomcat7/tomcat-users.xml
    - user: tomcat
    - group: tomcat
    - mode: 0600
    - template: jinja
    - defaults:
        tomcat7_admin_username: {{ pillar['tomcat7_admin_username'] }}
        tomcat7_admin_password: {{ pillar['tomcat7_admin_password'] }}
    - require:
      - user: tomcat
      - group: tomcat

tomcat7_server:
  # https://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html
  file.managed:
    - name: /opt/apache-tomcat-{{ pillar['tomcat7_version'] }}/conf/server.xml
    - source:
      - salt://tomcat7/server.xml
    - user: tomcat
    - group: tomcat
    - mode: 0600
    - template: jinja
    - defaults:
        tomcat7_http_port: {{ pillar['tomcat7_http_port'] }}
    - require:
      - user: tomcat
      - group: tomcat

tomcat7_service:
{% if grains['init'] == 'upstart' %}
  # https://docs.saltstack.com/en/latest/ref/states/all/salt.states.file.html
  file.managed:
    - name: /etc/init/tomcat7.conf
    - source:
      - salt://tomcat7/tomcat7.conf
    - user: root
    - group: root
    - mode: 0644
    - template: jinja
    - defaults:
        tomcat7_version: {{ pillar['tomcat7_version'] }}
  service.running:
    - name: tomcat7
    - enable: True
{% endif %}
Go through it ... you'll find exactly the same things that we put in the Ansible role. Everything is done with out-of-the-box Salt state modules (you can write your own of course, and Python would be the language to use for that).
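If you want to see how the jinja conditionals and pillar references render before applying anything, state.show_sls will show the compiled result (output omitted here) :
saltmaster@saltmaster:~$ sudo salt 'saltminion01' state.show_sls tomcat7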
And exactly as with the Ansible role, we have three template files that live on saltmaster, are filled in with variables and are then put on the minion :
/home/saltmaster/salt/file/base/tomcat7/tomcat-users.xml
/home/saltmaster/salt/file/base/tomcat7/server.xml
/home/saltmaster/salt/file/base/tomcat7/tomcat7.conf
They are exactly the same as for Ansible (with the exception of the ansible_managed variable, which obviously does not exist here), so I will not repeat their content.
Or wait ... there is actually one difference, and it has nothing to do with either Ansible or Saltstack but with the fact that our minion is running CentOS. In tomcat7.conf you have to comment out the setuid and setgid lines and adapt the exec line as follows :
...
exec sudo -u tomcat $CATALINA_HOME/bin/catalina.sh run
...
The reason is that the upstart version on CentOS 6 is quite old and does not know about setuid and setgid. It happens.
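For reference, the relevant part of tomcat7.conf then ends up looking roughly like this (a sketch only; the surrounding lines are unchanged from the Ansible version) :
...
# upstart on CentOS 6 does not support these stanzas
#setuid tomcat
#setgid tomcat
...
exec sudo -u tomcat $CATALINA_HOME/bin/catalina.sh run
...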
Before we can execute the tomcat7 state, we've got to assign it to our minion :
saltmaster@saltmaster:~$ vi salt/file/base/top.sls
base:
  'saltmaster':
    - master
  'saltminion*':
    - minion
    - tomcat7
And here we go :
saltmaster@saltmaster:~$ sudo salt '*' state.highstate
As you'll see ... the end result is exactly the same as with Ansible: Tomcat gets downloaded, extracted under /opt and started on saltminion01.
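A quick check on the minion that the service is really up (a sketch; the pid and exact output will obviously differ on your system) :
[saltminion@saltminion01 ~]$ sudo initctl status tomcat7
tomcat7 start/running, process 28452
[saltminion@saltminion01 ~]$ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080
200
Cleaning up on a minion is the same too :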
[saltminion@saltminion01 ~]$ sudo initctl stop tomcat7
tomcat7 stop/waiting
[saltminion@saltminion01 ~]$ sudo rm -rf /etc/init/tomcat7.conf
[saltminion@saltminion01 ~]$ sudo initctl reload-configuration
[saltminion@saltminion01 ~]$ sudo rm -rf /usr/share/tomcat
[saltminion@saltminion01 ~]$ sudo rm -rf /opt/apache-tomcat-7.0.73/
[saltminion@saltminion01 ~]$ sudo userdel tomcat
[saltminion@saltminion01 ~]$ sudo groupdel tomcat
Well, I hope I proved my point ... and my next post will not be about Tomcat.