Posts about DevOps
Build a fully working Zabbix server with database in seconds thanks to Docker
To install a Zabbix server quickly, Zabbix comes with help, as they have prebuilt their product as Docker images. There are lots of official Zabbix images on Docker Hub, so it can just overwhelm you: there are mixes of all the different possibilities, like Zabbix with MySQL, Postgres or SQLite, Zabbix served by nginx or Apache, or the Java gateway. Depending on which stack is closest to you, you can easily build a docker-compose file that will run the selected stack in seconds. My pick was nginx + MySQL, so to set up a fully running Zabbix server we need 3 images:
mysql-server
zabbix-web - web interface
zabbix-server - the main Zabbix process, responsible for polling and trapping data and sending notifications to users.
In addition you can add a Postfix mail server for notifying users, but it's not a must, as you can use your own mail server - if so, just remove the postfix service from the example below.
Note: you may want to pin specific versions or use the Alpine variants for a production environment.
Create some directory. The directory name is crucial here for visibility and future maintenance of your containers, volumes and networks, as the name will be used as a prefix for the containers created by docker-compose and also for the volume directories, so it will be easier to identify in the future which volume belongs to which stack. In Ubuntu, volumes are usually kept in /var/lib/docker/volumes, but you can mount any directory from the host by specifying an absolute or relative path in the service configuration - for instance, for mysql in the example below, '- ../mysql_data_dir:/var/lib/mysql' would mount mysql_data_dir from just outside of our containers folder.
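For example (the directory name zabbix here is just my own pick):

.. code-block:: bash

    mkdir zabbix && cd zabbix
    # after "docker-compose up" the named volume will carry the directory name
    # as a prefix, e.g. zabbix_mysql_data_dir:
    sudo docker volume ls
    sudo ls /var/lib/docker/volumes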
Now, within the directory, create docker-compose.yml with the selected technologies; in my case it is:

.. code-block:: yaml
    # docker-compose.yml
    version: '3'
    services:
      db:
        image: mysql:latest
        restart: always
        expose:
          - '3306'
        environment:
          MYSQL_ROOT_PASSWORD: 'my_secret_password'
          MYSQL_USER: 'zabbixuser'
          MYSQL_PASSWORD: 'zabbix_password'
          MYSQL_ROOT_HOST: '%'
        volumes:
          - 'mysql_data_dir:/var/lib/mysql'
      zabbix-server:
        image: zabbix/zabbix-server-mysql
        links:
          - "db:mysql"
          - "postfix:postfix"
        environment:
          MYSQL_ROOT_PASSWORD: 'my_secret_password'
          MYSQL_USER: 'zabbixuser'
          # must match MYSQL_PASSWORD of the db service
          MYSQL_PASSWORD: 'zabbix_password'
          DB_SERVER_HOST: 'mysql'
      zabbix-web:
        image: zabbix/zabbix-web-nginx-mysql
        ports:
          - '7777:80'
        links:
          - "db:mysql"
          - "zabbix-server:zabbix-server"
          - "postfix:postfix"
        environment:
          MYSQL_ROOT_PASSWORD: 'my_secret_password'
          MYSQL_USER: 'zabbixuser'
          MYSQL_PASSWORD: 'zabbix_password'
          DB_SERVER_HOST: 'mysql'
          ZBX_SERVER_HOST: "zabbix-server"
          PHP_TZ: "Europe/London"
      postfix:
        image: catatnight/postfix
        hostname: support
        environment:
          - maildomain=mydomain.com
          - smtp_user=admin:my_password
        ports:
          - "25:25"
        expose:
          - "25"
        volumes:
          - /etc/nginx/ssl/postfix:/etc/postfix/certs
          - /etc/nginx/ssl/postfix:/etc/opendkim/domainkeys
    volumes:
      mysql_data_dir:
        driver: local
The above is just enough to get a Zabbix server up and running in a couple of seconds. To do it, just run:

.. code-block:: bash

    sudo docker-compose up
That's it!!! You now have your Zabbix running on port 7777.
So what happened here? docker-compose up built and ran the four containers; when the zabbix-server container started, it discovered there were no tables in MySQL and created them.
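If anything looks off, you can watch the startup; a couple of standard commands (service names as in the compose file above):

.. code-block:: bash

    sudo docker-compose ps                      # all services should be "Up"
    sudo docker-compose logs -f zabbix-server   # watch the schema being created in mysql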
Now you just need to add the agents/servers you want to monitor. Check out adding an agent in the separate post below.
Versions I've used in this example (Feb 2018):

docker-compose: 1.17.0, build ac53b73
Docker: 17.09.1-ce, build 19e2cf6
Kernel: 4.13.0-36-generic
System: Ubuntu 16.04.3 LTS
Adding a Zabbix agent to the server
Zabbix is a very powerful tool which uses agents (or SNMP) to monitor server resources. Adding an agent is easy, but I had a couple of problems when I used the agent straight from my Ubuntu (16.04.3) repo: there was no encryption functionality in it - or so I guess, as the agent didn't recognize the TLS PSK configuration. Not very nice: by installing the agent straight from the repo with "sudo apt-get update && sudo apt-get install zabbix-agent" I had limited functionality and unencrypted server-agent traffic. So there are 2 options: we can install the Zabbix agent from the Zabbix repo, or use the Zabbix agent Docker container.

Adding the Zabbix agent to the host system. As of today 3.2 is the latest version, so please adjust it according to how old this article is:

.. code-block:: bash

    wget http://repo.zabbix.com/zabbix/3.2/ubuntu/pool/main/z/zabbix-release/zabbix-release_3.2-1+xenial_all.deb
    sudo dpkg -i zabbix-release_3.2-1+xenial_all.deb
    sudo apt-get update
    sudo apt-get purge zabbix-agent    # remove the previous agent if installed
    sudo apt-get install zabbix-agent
Now there are 3 basic options that need to be changed in the agent config file /etc/zabbix/zabbix_agentd.conf:
.. code-block:: ini

    Server=<ip of zabbix server>
    ServerActive=<ip of zabbix server>
    Hostname=<my host name>
sudo service zabbix-agent restart
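To confirm the agent picked the changes up, check its status and log; a quick look, assuming the default Ubuntu log location:

.. code-block:: bash

    sudo service zabbix-agent status
    tail /var/log/zabbix/zabbix_agentd.log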
Add host to server through web interface:
On the server go to Configuration -> Hosts -> Create host, type in the host name, the visible name and the public IP address of your agent. Select a group and add the agent. Next, select templates to add the services you need to monitor (here Linux + MySQL: Template DB MySQL, Template OS Linux). After saving you should see a green ZBX available label on the Hosts screen. Notice: I couldn't see the green ZBX agent icon until I added the Linux template (or the Zabbix agent template).
Security - setting up PSK encryption:
sh -c "openssl rand -hex 32 > /etc/zabbix/zabbix_agentd.psk" Now add below lines to /etc/zabbix/zabbix_agentd.conf TLSConnect=psk TLSAccept=psk #each identity id must be different for each serverr connected to one zabbix server TLSPSKIdentity=PSK SOMETHING TLSPSKFile=/etc/zabbix/zabbix_agentd.psk sudo service zabbix-agent restart Get generated key string: cat /etc/zabbix/zabbix_agentd.psk and add encryption in zabbix server web interface : In server go to Configuration-> Hosts -> my host->encryption
Select: Connections to host: PSK; Connections from host: PSK; PSK identity: PSK SOMETHING (same as in the agent config file); PSK: the generated hash (the content of the /etc/zabbix/zabbix_agentd.psk file on the agent). Now there should be a green PSK label and all our traffic will be encrypted.
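To verify the PSK setup from the server side, you can query the agent directly with zabbix_get; a minimal sketch, assuming the zabbix-get package is installed on the server, the agent listens on the default port 10050, and you have a local copy of the agent's PSK file:

.. code-block:: bash

    # queries the agent over TLS-PSK; prints "1" on success
    zabbix_get -s <agent-ip> -k agent.ping \
        --tls-connect psk \
        --tls-psk-identity "PSK SOMETHING" \
        --tls-psk-file /path/to/copy/of/zabbix_agentd.psk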
Adding the MySQL monitoring option:
Add user credentials for the mysql client on the agent server:

.. code-block:: sql

    GRANT ALL PRIVILEGES ON *.* TO zabbix@'%' IDENTIFIED BY 'zabbixuserpassword';

Use localhost, or the host you will be accessing MySQL from; '%' is just for test purposes, to eliminate authentication problems.
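If you don't want to grant everything, a more restrictive grant should be enough for typical status checks; a sketch (the exact privilege list is my assumption - adjust it to the items your template uses):

.. code-block:: bash

    # USAGE lets the user log in; PROCESS and REPLICATION CLIENT cover the
    # status/ping style items without handing out ALL PRIVILEGES
    mysql -uroot -p -e "GRANT USAGE, PROCESS, REPLICATION CLIENT ON *.* TO zabbix@'localhost' IDENTIFIED BY 'zabbixuserpassword';"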
Off topic - something about MySQL remote connections and security: my best practice is not to have any remote access like @'%' to MySQL on any server I manage. It's just dangerous, as anyone can try brute-forcing their way into our MySQL server. Another thing I saw in many places: if admins create @'%' accesses, they use them without any encryption, so there is plain-text traffic coming from mysql-server/postgres straight to the user's computer, which is not good (MITM etc.). The best option would be to have your MySQL server set up with an SSL certificate, but it's not a popular practice, as it may be time consuming to set up and to connect to such a server (pretty easy in mysql-workbench, though). A faster way to encrypt your confidential MySQL traffic is to use an SSH tunnel, but there is a limitation here: the user that needs access to the MySQL data needs to have SSH access to the server. If this is an option, just define users with localhost as source, like my_db_user@localhost - this is safer, as you can't guarantee MySQL users' competence, so best practice is to avoid '%'. To double-secure this method, do not expose 3306 to the public and only allow localhost (unix socket) and 127.0.0.1 to connect through this port (remember the mysql client unix-socket vs IP connection distinction). In dockerized MySQL instances, when I need the port to be visible I just configure ports like 127.0.0.1:3306:3306, so it is visible to the host machine only. But if the user won't have SSH access to the server, then the only option left is an SSL certificate. So remember: with user@'%' or even user@'some_ip', without SSL or SSH the traffic from mysql-server is still unencrypted.
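A minimal sketch of the SSH-tunnel approach (the hostname and the my_db_user account are placeholders):

.. code-block:: bash

    # forward local port 3306 to MySQL on the server (which only listens on 127.0.0.1 there)
    ssh -N -L 3306:127.0.0.1:3306 me@db-server.example.com &

    # connect through the tunnel; everything on the wire is now encrypted by SSH
    mysql -h 127.0.0.1 -P 3306 -u my_db_user -p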
OK, coming back to the MySQL monitoring config: add a client section to my.cnf in /etc/mysql or to /etc/mysql/conf.d/mysql.cnf:
.. code-block:: ini

    [client]
    user = zabbix
    password = zabbixuserpassword
    port = 3306
    host = 127.0.0.1
Then add my.cnf to the Zabbix agent's home directory:
.. code-block:: bash

    mkdir -p /var/lib/zabbix/
    cd /var/lib/zabbix
    ln -sv /etc/mysql/my.cnf
service zabbix-agent restart
Now you can add the MySQL template items in the Zabbix server.
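To verify that the mysql client picks the credentials up the same way the agent's userparameters do (the agent runs its checks with HOME=/var/lib/zabbix, see the bug note below), you can run the check by hand; it should print 1:

.. code-block:: bash

    # the same command the mysql.ping userparameter runs
    HOME=/var/lib/zabbix mysqladmin ping 2>&1 | grep -c alive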
Note: select the Linux template (or the Zabbix agent template) to see agent visibility.
Bug in the default userparameter_mysql agent file
cat /etc/zabbix/zabbix_agentd.d/userparameter_mysql.conf - the fix is to redirect stderr to stdout so grep can match the output:
.. code-block:: ini

    UserParameter=mysql.ping,HOME=/var/lib/zabbix mysqladmin ping 2>&1 | grep -c alive

Previously it was:

.. code-block:: ini

    UserParameter=mysql.ping,HOME=/var/lib/zabbix mysqladmin ping | grep -c alive

so the grep didn't work.
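You can test the fixed item locally on the agent; a quick check, assuming zabbix_agentd is in your PATH:

.. code-block:: bash

    # -t runs a single item test and prints its value, e.g. mysql.ping [t|1]
    zabbix_agentd -t mysql.ping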
Fabric - auto deployment script
Recently I wrote a Fabric deployment script - maybe someone will find it useful.
It makes it possible to run a "group execute" task against multiple hosts, or against a single host. All we need to do is define the group or single host as a function; afterwards each task is wrapped in the env_update decorator.
I know there could also be something like duplicating tasks with separate servers (fab live1 pull live2 pull), but I believe Fabric was written for distributed systems which have different app paths, users, etc.
(Also, roledefs with extra dict keys didn't work for me.) I want to keep simple single/multiple-host deployment commands like: fab live pull, fab test pull.
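For example, with the fabfile below, deployments look like this:

.. code-block:: bash

    fab test pull restart      # update and hard-restart the single test host
    fab live pull              # pull on every host in the live group
    fab help                   # list available hosts and commands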
.. code-block:: python

    from fabric.api import run, env, local, get, cd
    from fabric.tasks import execute
    import inspect
    import sys
    import os
    import re
    from StringIO import StringIO

    # fabfile author: Grzegorz Stencel
    # usage:
    #   run "fab help" for examples
    #   fab staging svnxapp:app=holdings_and_quotes,layout.py,permissions.py restart
    #   fab test svnxlib

    SERVER_BRANCHES = {
        'live': 'master',
        'sit': 'sit',
        'uat': 'uat',
        'live2': 'master',
        'live3': 'master',
    }

    # MAIN CONF
    SERVERS = {
        'local': {
            'envname': 'local',
            'user': 'greg',
            'host': 'localhost',
            'host_string': 'localhost',
            'path': os.environ.get('SITE_ROOT', '/opt/myapp/test'),
            'www_root': 'http://localhost:8081/',
            'retries_before_killing': 3,
            'retry_sleep': 2,
        },
        'test': {
            'envname': 'test',
            'user': 'root',
            'host': 'myapp-test.stencel.com',
            'host_string': 'myapp-test.stencel.com',
            'path': '/var/www/myapp/test/',
            'www_root': 'http://myapp-test.stencel.com/',
            'retries_before_killing': 3,
            'retry_sleep': 2,
        },
        'uat': {
            'envname': 'uat',
            'user': 'myapp',
            'host': 'uat.myapp2.stencel.com',
            'host_string': 'uat.myapp2.stencel.com',
            'key_filename': 'deploy/keys/id_rsa',
            'path': '/opt/myapp/uat/',
            'www_root': 'http://uat.myapp2.stencel.com/',
            'retries_before_killing': 3,
            'retry_sleep': 2,
        },
        'sit': {
            'envname': 'sit',
            'user': 'myapp',
            'host': 'sit.myapp2.stencel.com',
            'host_string': 'sit.myapp2.stencel.com',
            'key_filename': 'deploy/keys/id_rsa',
            'path': '/opt/myapp/sit/',
            'www_root': 'http://sit.myapp2.stencel.com/',
            'retries_before_killing': 3,
            'retry_sleep': 2,
        },
        'live': {
            'envname': 'live',
            'user': 'myapp',
            'host': '10.10.10.10',
            'host_string': 'myapp2.stencel.com',
            'path': '/opt/myapp/live/',
            'www_root': 'http://myapp2.stencel.com/',
            'retries_before_killing': 3,
            'retry_sleep': 2,
        },
        'live2': {
            'envname': 'live2',
            'user': 'root',
            'host': '10.10.10.11',
            'host_string': 'live2.stencel.com',
            'path': '/var/www/myapp/live/',
            'www_root': 'http://myapp2.stencel.com/',
            'retries_before_killing': 3,
            'retry_sleep': 2,
        },
        'live3': {
            'envname': 'live3',
            'user': 'root',
            'host': '10.10.10.12',
            'host_string': 'live3.stencel.com',
            'path': '/var/www/myapp/live/',
            'www_root': 'http://myapp2.stencel.com/',
            'retries_before_killing': 3,
            'retry_sleep': 2,
        },
    }

    LIVE_HOSTS = ['live', 'live2', 'live3']


    def list_hosts():
        """Lists available myapp hosts."""
        print " Single hosts (if you want to pull from svn only to one of them):"
        print '  %s' % '\n  '.join([a for a in SERVERS])
        print " Multiple hosts:"
        print '  live (which contains %s)' % ','.join([a for a in LIVE_HOSTS])


    def test():
        """Single host definition - "fab test restart" will restart this one host."""
        env.update(dict(SERVERS['test']))


    def localhost():
        """Single host definition - "fab localhost restart" will restart this one host."""
        env.update(dict(SERVERS['local']))


    def uat():
        """Single host definition - "fab uat restart" will restart this single host."""
        env.update(dict(SERVERS['uat']))


    def sit():
        """Single host."""
        env.update(dict(SERVERS['sit']))


    # SERVER GROUPS DEFINITION
    def live():
        """Multiple group of hosts - running "fab live restart" will restart all live servers."""
        env['hosts'] = [SERVERS[a]['host'] for a in LIVE_HOSTS]
        # env.update(dict(SERVERS['staging']))


    def env_update(func):
        """Decorator - needs to be added to each task in the fabfile - for multiple-host task execution."""
        def func_wrapper(*args, **kwargs):
            if not len(env.hosts):
                return func(*args, **kwargs)
            else:
                # pick the config of the host currently being executed
                env.update(dict(SERVERS[filter(lambda x: SERVERS[x]['host'] == env.host, SERVERS)[0]]))
                func(*args, **kwargs)
        return func_wrapper


    @env_update
    def bundle_media():
        """Bundles media like css and js into one file. Example: fab test bundle_media"""
        # export DJANGO_SETTINGS_MODULE=settings
        # run("cd {0} && source settings/{1}-config.sh && python scripts/bundle_media.py".format(env.path, env.envname))
        run("source /usr/share/virtualenvwrapper/virtualenvwrapper.sh && workon {0} && python scripts/bundle_media.py".format(
            "%s-myapp" % env.envname if env.envname != 'live' else 'MyApp-test'))  # change live venv to be live-MyApp


    def _valid_branch(env):
        branch = run("cd {0} && git rev-parse --abbrev-ref HEAD".format(env.path))
        return branch == SERVER_BRANCHES[env.envname] and not env.envname == 'local'


    @env_update
    def pull(*args, **kwargs):
        if _valid_branch(env):
            with cd(env.path):
                run("git fetch origin")
                run("git reset --hard origin/%s" % SERVER_BRANCHES[env.envname])
        else:
            print "Error: Server is checked out to wrong branch!!!"
        # run('git fetch --quiet')
        # run('git fetch --tags --quiet')


    @env_update
    def reload():
        """Reloads specified servers - kills unused gunicorn workers but waits for workers with old code to finish processing."""
        bundle_media()
        # if env.envname in ('uat', 'staging', 'live'):
        f = StringIO()
        get("/opt/myapp/%s/pid" % env.envname, f)
        pid = re.search(r'\d+', f.getvalue()).group()
        run("ps aux | grep gunicorn | grep %s | grep master | grep -v grep | awk '{print $2}'" % env.envname)
        run("kill -HUP %s" % pid)


    @env_update
    def restart():
        """Hard-restarts specified servers."""
        bundle_media()
        run("ps aux | grep gunicorn | grep %s | grep master | grep -v grep | awk '{print $2}'" % env.envname)
        run("supervisorctl stop myapp-%s && supervisorctl start myapp-%s" % (env.envname, env.envname))
        run("ps aux | grep gunicorn | grep %s | grep master | grep -v grep | awk '{print $2}'" % env.envname)


    def help():
        fabric_functions = ['run', 'execute', 'local', 'func_wrapper']
        functions = set([obj.__name__ if obj.__name__ not in fabric_functions else ''
                         for name, obj in inspect.getmembers(sys.modules[__name__])
                         if inspect.isfunction(obj)])
        functions.remove('')
        print "usage:\n fab [host/group of hosts] [commands] (optional command with arguments command:kwarg=val,arg1,arg2,arg3)"
        print "\navailable servers:"
        list_hosts()
        print "\ncommands:\n %s" % ', '.join([a for a in functions])
        print "\nexamples:\n staging svnxapp:app=holdings_and_quotes,layout.py,permissions.py restart"
        print " fab test restart"
        print " fab staging svnxapp:app=holdings_and_quotes,lib/quote.py,layout.py,models.py"
        print " fab staging svnxapp:app=holdings_and_quotes,lib/quote.py restart"
        print " fab test build"
        print " fab test bundle_media restart"
        print " For svnx whole app (comma in the end):"
        print " fab test svnxapp:app=medrep,"
        print " For global lib:"
        print " fab test svnxlib"
        print " For whole global media:"
        print " fab test svnxmedia:"
        print " For global media file:"
        print " fab test svnxmedia:javascript"
        print " fab test svnxmedia:javascript/company/checklist.js"
        print "\nIf a .js file is in args, like: fab staging svnxapp:app=holdings_and_quotes,media/js/quote.js,layout.py,models.py"
        print "it will bundle media itself"
        print "Restart test staging without params:\n fab restart"
        for f in functions:
            print f
            print globals()[f].__doc__
            print "\n"


    @env_update
    def accessguni():
        run("tail /var/log/myapp/access-%s.log" % env.envname.upper())


    @env_update
    def accessgunilive():
        run("tail -f /var/log/myapp/access-%s.log" % env.envname.upper())


    @env_update
    def errorguni():
        run("tail /var/log/myapp/error-%s.log" % env.envname.upper())


    @env_update
    def errorgunilive():
        run("tail -f /var/log/myapp/error-%s.log" % env.envname.upper())


    def hostname():
        run('uname -a')


    @env_update
    def uptime():
        run('uptime')
Fabric
Fabric execution: fab -H me@host1,me@host2,me@host3 function
Example: fab -H greg@mmyserver.com get_backup, or alternatively:
Example: fab production deploy - but then you'll have to have production defined inside your fabfile.py:

.. code-block:: python

    def production():
        env.update(dict(
            dest='production',
            hosts=['some_ip_address'],
        ))

    def development():
        env.update(dict(
            dest='development',
            hosts=['localhost'],
        ))
local - execute a local command (on the host from which we launch Fabric)
run - execute a remote command on all specified hosts (user-level permissions)
sudo - sudo a command on the remote server
put - copy a local file over to a remote destination
get - download a file from the remote server
prompt - prompt the user with text and return the input (like raw_input)
reboot - reboot the remote system, disconnect, and wait for "wait" seconds
Download some logs:

.. code-block:: python

    get(remote_path="/tmp/log_extracts.tar.gz", local_path="/logs/new_log.tar.gz")