
MySQL on Kubernetes with persistent volumes and secrets

Volumes

Persistent storage with NFS

In this example I have created an NFS share called "PersistentVolume" on my QNAP NAS, whose IP is 192.168.1.11. Create persistentVolume.yml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteMany
  mountOptions:
    - nfsvers=4.1
  nfs:
    path: /PersistentVolume/pv0001
    server: 192.168.1.11
  persistentVolumeReclaimPolicy: Retain

Create persistentVolumeClaim.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

Secrets

The configuration of your containers should be stored in a separate place to guarantee mobility; it shouldn't be hardcoded, nor should it be stored in the database. The best approach is to keep configuration in environment variables: with Docker, for instance, you can keep it in gitignored env files or in env vars that you set at container startup. In Kubernetes you have the option to store configuration such as usernames, passwords, API URLs etc. in ConfigMaps and Secrets. Passwords shouldn't be stored in ConfigMaps, though, as they are kept there in plain text. So the best choice for passwords is Secrets, which store data base64-encoded.
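As a side note, you don't have to base64-encode values by hand: kubectl can build an equivalent Secret for you. This is just a sketch using the same secret name and keys as in this example; the YAML approach below is what the post actually uses.

kubectl create secret generic mysql-secrets \
  --from-literal=MYSQL_ROOT_PASSWORD=MyPassword \
  --from-literal=MYSQL_USER=django \
  --from-literal=MYSQL_PASSWORD=django \
  --from-literal=MYSQL_DATABASE=kubernetes_test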

Create a password, a user and a database name, and encode them with base64:

echo -n "MyPassword" | base64 #TXlQYXNzd29yZA==
echo -n "django" | base64  # ZGphbmdv
echo -n "kubernetes_test" | base64 # a3ViZXJuZXRlc190ZXN0

Put the encoded values into secrets.yml:

---
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secrets
type: Opaque
data:
  MYSQL_ROOT_PASSWORD: TXlQYXNzd29yZA==
  MYSQL_USER: ZGphbmdv
  MYSQL_PASSWORD: ZGphbmdv
  MYSQL_DATABASE: a3ViZXJuZXRlc190ZXN0

Create the secret on your cluster:

kubectl create -f secrets.yml
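To double-check what ended up in the cluster, you can read the secret back and decode one of the keys (standard kubectl/jsonpath usage; key names as defined above):

kubectl get secret mysql-secrets -o yaml
kubectl get secret mysql-secrets -o jsonpath='{.data.MYSQL_USER}' | base64 --decode   # django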

MySQL application

Now, having the PersistentVolumeClaim and the secrets, we can write the MySQL deployment file.

deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          ports:
            - containerPort: 3306
          volumeMounts:
            - mountPath: "/var/lib/mysql"
              subPath: "mysql"
              name: mysql-data
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secrets
                  key: MYSQL_ROOT_PASSWORD
            - name: MYSQL_USER
              valueFrom:
                secretKeyRef:
                  name: mysql-secrets
                  key: MYSQL_USER
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secrets
                  key: MYSQL_PASSWORD
            - name: MYSQL_DATABASE
              valueFrom:
                secretKeyRef:
                  name: mysql-secrets
                  key: MYSQL_DATABASE
      volumes:
        - name: mysql-data
          persistentVolumeClaim:
            claimName: mysql-pv-claim

Apply it to the cluster:

kubectl apply -f deployment.yml

Checking

Now we can check if our deployment was successful:

kubectl get deployments

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
mysql-deployment   1/1     1            1           66m

If something's wrong, you can always investigate with describe or logs:

kubectl describe deployment mysql-deployment

Name:                   mysql-deployment
Namespace:              default
CreationTimestamp:      Sun, 28 Jun 2020 17:02:00 +0000
Labels:                 app=mysql
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=mysql
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
Labels:  app=mysql
Containers:
mysql:
    Image:      mysql:5.7
    Port:       3306/TCP
    Host Port:  0/TCP
    Environment:
    MYSQL_ROOT_PASSWORD:  <set to the key 'MYSQL_ROOT_PASSWORD' in secret 'mysql-secrets'>  Optional: false
    MYSQL_USER:           <set to the key 'MYSQL_USER' in secret 'mysql-secrets'>           Optional: false
    MYSQL_PASSWORD:       <set to the key 'MYSQL_PASSWORD' in secret 'mysql-secrets'>       Optional: false
    MYSQL_DATABASE:       <set to the key 'MYSQL_DATABASE' in secret 'mysql-secrets'>       Optional: false
    Mounts:
    /var/lib/mysql from mysql-data (rw,path="mysql")
Volumes:
mysql-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mysql-pv-claim
    ReadOnly:   false
Conditions:
Type           Status  Reason
----           ------  ------
Available      True    MinimumReplicasAvailable
Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   mysql-deployment-579b8bb767 (1/1 replicas created)
Events:          <none>

Or investigate pods

kubectl get pods

NAME                                READY   STATUS    RESTARTS   AGE
mysql-deployment-579b8bb767-mk5jx   1/1     Running   0          69m

kubectl describe pod mysql-deployment-579b8bb767-mk5jx

Name:         mysql-deployment-579b8bb767-mk5jx
Namespace:    default
Priority:     0
Node:         worker4/192.168.50.15
Start Time:   Sun, 28 Jun 2020 17:02:00 +0000
Labels:       app=mysql
            pod-template-hash=579b8bb767
Annotations:  cni.projectcalico.org/podIP: 192.168.199.131/32
Status:       Running
IP:           192.168.199.131
IPs:
IP:           192.168.199.131
Controlled By:  ReplicaSet/mysql-deployment-579b8bb767
Containers:
mysql:
    Container ID:   docker://b755c731e9b72812040d62315a2499d05cdaa6b8425e6b357fa19f1e9d6aed2c
    Image:          mysql:5.7
    Image ID:       docker-pullable://mysql@sha256:32f9d9a069f7a735e28fd44ea944d53c61f990ba71460c5c183e610854ca4854
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Running
    Started:      Sun, 28 Jun 2020 17:02:02 +0000
    Ready:          True
    Restart Count:  0
    Environment:
    MYSQL_ROOT_PASSWORD:  <set to the key 'MYSQL_ROOT_PASSWORD' in secret 'mysql-secrets'>  Optional: false
    MYSQL_USER:           <set to the key 'MYSQL_USER' in secret 'mysql-secrets'>           Optional: false
    MYSQL_PASSWORD:       <set to the key 'MYSQL_PASSWORD' in secret 'mysql-secrets'>       Optional: false
    MYSQL_DATABASE:       <set to the key 'MYSQL_DATABASE' in secret 'mysql-secrets'>       Optional: false
    Mounts:
    /var/lib/mysql from mysql-data (rw,path="mysql")
    /var/run/secrets/kubernetes.io/serviceaccount from default-token-4wtnw (ro)
Conditions:
Type              Status
Initialized       True
Ready             True
ContainersReady   True
PodScheduled      True
Volumes:
mysql-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mysql-pv-claim
    ReadOnly:   false
default-token-4wtnw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4wtnw
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

Or logs from pod

kubectl logs mysql-deployment-579b8bb767-mk5jx

2020-06-28T17:02:13.695295Z 0 [Note] IPv6 is available.
2020-06-28T17:02:13.695350Z 0 [Note]   - '::' resolves to '::';
2020-06-28T17:02:13.695392Z 0 [Note] Server socket created on IP: '::'.
2020-06-28T17:02:13.695906Z 0 [Warning] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
2020-06-28T17:02:13.703856Z 0 [Note] InnoDB: Buffer pool(s) load completed at 200628 17:02:13
2020-06-28T17:02:13.746239Z 0 [Note] Event Scheduler: Loaded 0 events
2020-06-28T17:02:13.746461Z 0 [Note] mysqld: ready for connections.
Version: '5.7.30'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server (GPL)

Here we can see that our MySQL server is up and running.

We can now test whether our secrets were applied by running the exact same exec syntax as in Docker. NEVER PROVIDE THE PASSWORD ON THE COMMAND LINE, THIS IS JUST FOR DEMONSTRATION PURPOSES; if you pass just -p you will be prompted for the password.

kubectl exec -it mysql-deployment-579b8bb767-mk5jx -- mysql -u root -pMyPassword

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| kubernetes_test    |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.02 sec)

We can see that the initial database kubernetes_test was created. Let's also try to log in to it with the user and password we set up:

kubectl exec -it mysql-deployment-579b8bb767-mk5jx -- mysql -u django -pdjango kubernetes_test

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

Everything works as expected!!

Kubernetes NFS persistent volume


Create nfs persistent volume:

What you need

  • NFS server - I used the NFS server already installed on my QNAP NAS (you need to enable NO_ROOT_SQUASH in the share permissions)

  • K8s cluster

Now, having your NFS share at 192.168.1.11:/PersistentVolume, you can check whether it works with mount:

sudo mount -t nfs 192.168.1.11:/PersistentVolume /mnt/PersistentVolume
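If the mount command fails with a "bad option" or missing-helper error, the NFS client tools are probably not installed; note that every Kubernetes worker node that will mount this volume needs them as well. A sketch for Debian/Ubuntu (package names differ on other distros):

sudo apt-get update && sudo apt-get install -y nfs-common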

Later on you can secure access with a password.

If everything works fine, we need a PersistentVolume on our cluster.

persistentvolume.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteMany
  mountOptions:
    - nfsvers=4.1
  nfs:
    path: /PersistentVolume/pv0001
    server: 192.168.1.11
  persistentVolumeReclaimPolicy: Retain

Apply the above YAML to the cluster:

kubectl apply -f persistentvolume.yml

Now we need to declare a PersistentVolumeClaim.

persistentvolumeclaim.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteMany 
  resources:
    requests:
      storage: 10Gi

Apply

kubectl apply -f persistentvolumeclaim.yml

Check if it has been bound:

kubectl get pv

NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
pv0001   100Gi      RWX            Retain           Bound    default/mysql-pv-claim                           2d4h

kubectl get pvc 
NAME              STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim    Bound     pv0001   100Gi      RWX                           2d4h

Build a fully working Zabbix server with database in seconds thanks to Docker

To install a Zabbix server quickly, Zabbix comes to the rescue: they have prebuilt their product as Docker images. There are lots of official Zabbix images on Docker Hub, so many that it can overwhelm you. There are combinations of all the different possibilities: Zabbix with MySQL, PostgreSQL or SQLite; Zabbix served by nginx or Apache; the Java gateway. Depending on which stack is closest to you, you can easily build a docker-compose file that will run the selected stack in seconds. My pick was nginx + MySQL, so to set up a fully running Zabbix server we need 3 images:

  • mysql-server

  • zabbix-web - web interface

  • zabbix-server - main zabbix process responsible for polling and trapping data and sending notifications to users.

In addition you can add a Postfix mail server for notifying users, but it's not a must, as you can use your own mail server; if so, just remove the postfix service from the example below.

Notice: you may want to pin specific versions or use Alpine variants for a production environment.

Create a directory. The directory name is crucial here for visibility and future maintenance of your containers, volumes and networks, as the name will be used as a prefix for the containers created by docker-compose and also for the volume directories, so it will be easier to identify later which volume belongs to which stack. In Ubuntu, volumes are usually kept in /var/lib/docker/volumes, but you can mount any directory from the host by specifying an absolute or relative path in the service configuration. For instance, for MySQL in the example, to mount mysql_data_dir just outside of our containers folder:

volumes:
  - '../mysql_data_dir:/var/lib/mysql'

Now, within the directory, create docker-compose.yml with the selected technologies; in my case it is:

version: '3'

services:
  db:
    image: mysql:latest
    restart: always
    expose:
      - '3336'
    environment:
      MYSQL_ROOT_PASSWORD: 'my_secret_password'
      MYSQL_USER: 'zabbixuser'
      MYSQL_PASSWORD: 'zabbix_password'
      MYSQL_ROOT_HOST: '%'
    volumes:
      - 'mysql_data_dir:/var/lib/mysql'


  zabbix-server:
    image: zabbix/zabbix-server-mysql
    links:
      - "db:mysql"
      - "postfix:postfix"
    environment:
      MYSQL_ROOT_PASSWORD: 'my_secret_password'
      MYSQL_USER: 'zabbixuser'
      MYSQL_PASSWORD: 'zabbix_password'
      DB_SERVER_HOST: 'mysql'


  zabbix-web:
    image: zabbix/zabbix-web-nginx-mysql
    ports:
      - '7777:80'
    links:
      - "db:mysql"
      - "zabbix-server:zabbix-server"
      - "postfix:postfix"
    environment:
      MYSQL_ROOT_PASSWORD: 'my_secret_password'
      MYSQL_USER: 'zabbixuser'
      MYSQL_PASSWORD: 'zabbix_password'
      DB_SERVER_HOST: 'mysql'
      ZBX_SERVER_HOST: "zabbix-server"
      PHP_TZ: "Europe/London"
  postfix:
    image: catatnight/postfix
    hostname: support
    environment:
      - maildomain=mydomain.com
      - smtp_user=admin:my_password
    ports:
      - "25:25"
    expose:
      - "25"
    volumes:
      - /etc/nginx/ssl/postfix:/etc/postfix/certs
      - /etc/nginx/ssl/postfix:/etc/opendkim/domainkeys
volumes:
  mysql_data_dir:
    driver: local

The above solution is just enough to get the Zabbix server up and running in a couple of seconds. To do it, just run:

sudo docker-compose up
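If you prefer to run the stack in the background and watch the Zabbix server create its schema, the usual docker-compose commands work (service names as defined in the file above):

sudo docker-compose up -d
sudo docker-compose ps
sudo docker-compose logs -f zabbix-server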

That's it!!! You now have your Zabbix running on port 7777.

So what happened here: docker-compose up built and ran the containers; when the Zabbix container started, it discovered there were no tables in MySQL and created them.

Now you just need to add the agents/servers you want to monitor. Check out adding an agent in a separate post.

Versions: (versions I've used in this example Feb 2018):

Docker-compose: 1.17.0, build ac53b73
Docker: 17.09.1-ce, build 19e2cf6
Kernel: 4.13.0-36-generic
System: Ubuntu 16.04.3 LTS

Adding a Zabbix agent to the server

Zabbix is a very powerful tool which uses agents (or SNMP) to monitor server resources. Adding an agent is easy, but I had a couple of problems when I used the agent straight from my Ubuntu (16.04.3) repo, as there was no encryption functionality in it (or at least the agent didn't recognize the TLS PSK configuration), so by installing the agent straight from the repo with "sudo apt-get update && sudo apt-get install zabbix-agent" I got limited functionality and unencrypted server-agent traffic. So there are two options: install the Zabbix agent from the Zabbix repo, or use the Zabbix agent Docker container. Adding the Zabbix agent to the host system (3.2 is the latest as of today, so adjust the version according to how old this article is):

wget http://repo.zabbix.com/zabbix/3.2/ubuntu/pool/main/z/zabbix-release/zabbix-release_3.2-1+xenial_all.deb
sudo dpkg -i zabbix-release_3.2-1+xenial_all.deb
sudo apt-get update
sudo apt-get purge zabbix-agent    # remove the previous agent if installed
sudo apt-get install zabbix-agent

Now there are 3 basic options that need to be changed in the agent config file /etc/zabbix/zabbix_agentd.conf:

Server=ip of zabbix server
ServerActive=ip of zabbix server
Hostname=My host name

sudo service zabbix-agent restart
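To make sure the agent picked up the new config, you can test an item locally and check the agent log (standard zabbix_agentd test mode; the log path may differ on your install):

zabbix_agentd -t agent.ping          # expect something like: agent.ping [u|1]
tail -n 20 /var/log/zabbix/zabbix_agentd.log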

Add host to server through web interface:

On the server go to Configuration -> Hosts -> Create host, type in the host name, the visible name and the public IP address of your agent. Select a group and add the agent. Next, select templates to add the services you need to monitor (here Linux + MySQL: Template DB MySQL, Template OS Linux). After saving you should see a green ZBX available label on the Hosts screen. Notice: I couldn't see the green ZBX agent icon until I added the Linux template / Zabbix agent template.

Security - setting up PSK encryption:

sh -c "openssl rand -hex 32 > /etc/zabbix/zabbix_agentd.psk"

Now add the lines below to /etc/zabbix/zabbix_agentd.conf:

TLSConnect=psk
TLSAccept=psk
# each identity must be different for each server connected to one zabbix server
TLSPSKIdentity=PSK SOMETHING
TLSPSKFile=/etc/zabbix/zabbix_agentd.psk

sudo service zabbix-agent restart

Get the generated key string with cat /etc/zabbix/zabbix_agentd.psk and add the encryption in the Zabbix server web interface: on the server go to Configuration -> Hosts -> my host -> Encryption.

Select: Connections to host: PSK; Connections from host: PSK; PSK identity: PSK SOMETHING (the same as in the zabbix agent config file); PSK: the generated hash (the content of the /etc/zabbix/zabbix_agentd.psk file on the agent). Now there should be a green PSK label and all our traffic will be encrypted.
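You can also verify the PSK connection from the server side with zabbix_get (from the zabbix-get package). The agent IP below is a placeholder, and the PSK file here is assumed to be a local copy on the server of the same hex key generated on the agent:

zabbix_get -s <agent-ip> -k agent.ping \
  --tls-connect psk \
  --tls-psk-identity "PSK SOMETHING" \
  --tls-psk-file /etc/zabbix/zabbix_agentd.psk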

Adding mysql monitoring option:

Add user credentials for the mysql client on the agent server:

mysql> grant all privileges on *.* to zabbix@'%' identified by 'zabbixuserpassword';

Use localhost, or the host you will be accessing MySQL from; '%' is just for test purposes, to eliminate authentication problems.

Off topic - something about MySQL remote connections and security: my best practice is not to have any remote access like @'%' to MySQL on any server I manage; it's just dangerous, and anyone can try brute-forcing a connection to our MySQL server. Another thing I have seen in many places: if admins create @'%' accesses, they use them without any encryption, so there is plain-text traffic coming from mysql-server/postgres straight to the user's computer, which is not good (MITM etc.). The best would be to have your MySQL server set up with an SSL certificate, but that's not a popular practice, as it may be time consuming to set up and to connect to such a server (though pretty easy in mysql-workbench). A faster way to encrypt your confidential MySQL traffic is to use an SSH tunnel, but there is a limitation: the user that needs access to MySQL data needs SSH access to the server. If that is an option, just define users with localhost as the source, like my_db_user@localhost; this is safer, as you can't guarantee the competence of MySQL users, so best practice is to avoid '%'. To double-secure this method, do not expose 3306 to the public and only allow localhost (unix socket) and 127.0.0.1 to connect through this port. In dockerized MySQL instances, when I need the port to be visible I just set the ports config to 127.0.0.1:3306:3306, so it is visible to the host machine only. But if the user won't have SSH access to the server, then the only option you have is using an SSL cert. So remember: with user@'%' or even user@'some_ip', without SSL or SSH the traffic from mysql-server is still unencrypted.

OK, coming back to the MySQL monitoring config: add a [client] section to my.cnf in /etc/mysql or to /etc/mysql/conf.d/mysql.cnf:

[client]
user = zabbix
password = zabbixuserpassword
port = 3326
host = 127.0.0.1

Link my.cnf into the Zabbix agent home directory:

mkdir -p /var/lib/zabbix/
cd /var/lib/zabbix
ln -sv /etc/mysql/my.cnf

service zabbix-agent restart
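As a quick sanity check before touching the server UI, you can run the same command the mysql.ping item uses (see the userparameter file below); it should print 1 when MySQL is reachable with the [client] credentials:

HOME=/var/lib/zabbix mysqladmin ping 2>&1 | grep -c alive   # expect: 1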

Now you can add the MySQL template items in the Zabbix server.

Select the Linux templates to see agent visibility.

Bug in the default userparameter_mysql agent file

Check /etc/zabbix/zabbix_agentd.d/userparameter_mysql.conf and redirect stderr to stdout so that the grep works later:

UserParameter=mysql.ping,HOME=/var/lib/zabbix mysqladmin ping 2>&1 | grep -c alive

Previously it was:

UserParameter=mysql.ping,HOME=/var/lib/zabbix mysqladmin ping | grep -c alive

so the grep didn't work.
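After fixing the parameter you can restart the agent and test the item locally (agent test mode; exact output formatting may vary):

sudo service zabbix-agent restart
zabbix_agentd -t mysql.ping          # expect something like: mysql.ping [t|1]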


Zabbix stack with docker-compose.yml

Fully working zabbix server solution with UI and database in seconds

I wanted to install a Zabbix server quickly with Docker, but the number of Zabbix images (created by Zabbix) on Docker Hub just overwhelmed me. To set up a running Zabbix server we need 3 images:

  • a choice of SQL DB

  • zabbix-web - web interface

  • zabbix-server - main zabbix process responsible for polling and trapping data and sending notifications to users.

My choice of database was MySQL, so I created a docker-compose file to get a full stack of a running Zabbix server.

Notice: you may want to use Alpine versions for a production environment. docker-compose.yml:

version: '3'

services:
  db:
    image: mysql:latest
    restart: always
    expose:
      - '3336'
    environment:
      MYSQL_ROOT_PASSWORD: 'my_secret_password'
      MYSQL_USER: 'zabbixuser'
      MYSQL_PASSWORD: 'zabbixpass'
      MYSQL_ROOT_HOST: '%'
    volumes:
      - 'mysql_data_dir:/var/lib/mysql'


  zabbix-server:
    image: zabbix/zabbix-server-mysql
    links:
      - "db:mysql"
      - "postfix:postfix"
    environment:
      MYSQL_ROOT_PASSWORD: 'my_secret_password'
      MYSQL_USER: 'zabbixuser'
      MYSQL_PASSWORD: 'zabbixpass'
      DB_SERVER_HOST: 'mysql'


  zabbix-web:
    image: zabbix/zabbix-web-nginx-mysql
    ports:
      - '7777:80'
    links:
      - "db:mysql"
      - "zabbix-server:zabbix-server"
      - "postfix:postfix"
    environment:
      MYSQL_ROOT_PASSWORD: 'my_secret_password'
      MYSQL_USER: 'zabbixuser'
      MYSQL_PASSWORD: 'zabbixpass'
      DB_SERVER_HOST: 'mysql'
      ZBX_SERVER_HOST: "zabbix-server"
      PHP_TZ: "Europe/London"
  postfix:
    image: catatnight/postfix
    hostname: support
    environment:
      - maildomain=domain.com
      - smtp_user=admin:password
    ports:
      - "25:25"
    #  - "465:465"
    #  - "587:587"
    expose:
      - "25"
    #  - "465"
    #  - "587"
    volumes:
      - /etc/nginx/ssl/postfix:/etc/postfix/certs
      - /etc/nginx/ssl/postfix:/etc/opendkim/domainkeys
volumes:
  mysql_data_dir:
    driver: local


The above solution is just enough to get the Zabbix server up and running in a couple of seconds. To run it, just put the yml file into some directory (the directory name matters, as the volume created for MySQL will have this directory name as a prefix; volumes are usually stored in /var/lib/docker/volumes) and run:

sudo docker-compose up

That's it!!! You now have your Zabbix running on port 7777.

So what happened here: docker-compose up built and ran the containers; when the Zabbix container started, it discovered there were no tables in MySQL and created them.

Now you just need to add the agents/servers you want to monitor. Check out adding an agent in a separate post [here].

Versions: (versions I've used in this example Feb 2018):

Docker-compose: 1.17.0, build ac53b73
Docker: 17.09.1-ce, build 19e2cf6
Kernel: 4.13.0-36-generic

Git commands I've found useful

Check files changed between branches

git diff --name-status master..devel

Check changes on a file from a different branch/commit

git diff commit_hash -- filename

Same as above between 2 branches/commits

git diff commit_hash master -- filename

Check full file history

git log -p -- filename

Check who broke production server :

git blame filename

Merge as one commit (you need to commit afterwards; unlike a normal merge, it does not commit by default):

git merge --squash branch
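For example, with a hypothetical branch called feature-x, the squash still needs its own commit:

git merge --squash feature-x
git commit -m "Merge feature-x as a single commit"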

List of commits in git local storage

git reflog

Take (checkout) a file from a different branch/commit

git checkout develop -- filename
git checkout commit_hash -- filename

Reset current branch to remote:

git reset --hard origin/current_branch
git reset --hard origin/master

Save and set aside changes that were not committed

git stash
git stash save -a

Restore a stash (picking a selected one)

git stash list
git stash pop stash@{0}

fabric - auto deployment script

Recently I wrote a Fabric deployment script; maybe someone will find it useful.

It makes it possible to run a "group execute" task with:

fab live_servers pull restart

or on a single host:

fab live1 pull

All we need to do is define a group or a single host as a function; afterwards I used the env update decorator.

I know tasks could also be duplicated per server, like fab live1 pull live2 pull, but I believe Fabric was written for distributed systems which have different app paths, users etc.

Also, roledefs with extra dict keys didn't work for me. I want to keep simple single/multiple host deployment commands like: fab live_servers pull, fab test pull.
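With the fabfile below, typical invocations look like this (live, test, pull and restart are the tasks defined in the script; live corresponds to the live_servers group mentioned above):

fab help                  # list hosts and available tasks
fab test restart          # hard-restart the single test host
fab live pull restart     # pull and restart every live server

The fabfile itself: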

from fabric.api import run, env, local, get, cd
from fabric.tasks import execute
import inspect
import sys
import os
import re
from StringIO import StringIO

# fabfile author: Grzegorz Stencel
# usage:
# run: fab help for examples
# fab staging svnxapp:app=holdings_and_quotes,layout.py,permissions.py restart
# fab test svnxlib

SERVER_BRANCHES = {
    'live': 'master',
    'sit': 'sit',
    'uat': 'uat',
    'live2':'master',
    'live3':'master'

}
# MAIN CONF
SERVERS = {
    'local': {
        'envname': 'local',
        'user': 'greg',
        'host': 'localhost',
        'host_string': 'localhost',
        'path': os.environ.get('SITE_ROOT', '/opt/myapp/test'),
        'www_root': 'http://localhost:8081/',
        'retries_before_killing': 3,
        'retry_sleep': 2
    },
    'test': {
        'envname': 'test',
        'user': 'root',
        'host': 'myapp-test.stencel.com',
        'host_string': 'myapp-test.stencel.com',
        'path': '/var/www/myapp/test/',
        'www_root': 'http://myapp-test.stencel.com/',
        'retries_before_killing': 3,
        'retry_sleep': 2
    },
    'uat': {
        'envname': 'uat',
        'user': 'myapp',
        'host': 'uat.myapp2.stencel.com',
        'host_string': 'uat.myapp2.stencel.com',
        'key_filename': 'deploy/keys/id_rsa',
        'path': '/opt/myapp/uat/',
        'www_root': 'http://uat.myapp2.stencel.com/',
        'retries_before_killing': 3,
        'retry_sleep': 2
    },
    'sit': {
        'envname': 'sit',
        'user': 'myapp',
        'host': 'sit.myapp2.stencel.com',
        'host_string': 'sit.myapp2.stencel.com',
        'key_filename': 'deploy/keys/id_rsa',
        'path': '/opt/myapp/sit/',
        'www_root': 'http://sit.myapp2.stencel.com/',
        'retries_before_killing': 3,
        'retry_sleep': 2
    },
    'live': {
        'envname': 'live',
        'user': 'myapp',
        'host': '10.10.10.10',
        'host_string': 'myapp2.stencel.com',
        'path': '/opt/myapp/live/',
        'www_root': 'http://myapp2.stencel.com/',
        'retries_before_killing': 3,
        'retry_sleep': 2
    },
    'live2': {
        'envname': 'live2',
        'user': 'root',
        'host': '10.10.10.11',
        'host_string': 'live2.stencel.com',
        'path': '/var/www/myapp/live/',
        'www_root': 'http://myapp2.stencel.com/',
        'retries_before_killing': 3,
        'retry_sleep': 2
    },
    'live3': {
        'envname': 'live3',
        'user': 'root',
        'host': '10.10.10.12',
        'host_string': 'live3.stencel.com',
        'path': '/var/www/myapp/live/',
        'www_root': 'http://myapp2.stencel.com/',
        'retries_before_killing': 3,
        'retry_sleep': 2
    },

}

LIVE_HOSTS = ['live', 'live2', 'live3']


def list_hosts():
    """
    Lists available myapp hosts
    """
    print " Single hosts(if you want to pull from svn only to one of them):"
    print '   %s' % '\n   '.join([a for a in SERVERS])
    print " Multiple hosts"
    print '   live (which contains %s)' % ','.join([a for a in LIVE_HOSTS])


def test():
    """
    single host definition , "fab test restart" wil restart this one host

    """
    env.update(dict(SERVERS['test']))


def localhost():
    """
    single host definition , "fab test restart" wil restart this one host

    """
    env.update(dict(SERVERS['local']))


def uat():
    """
    single host definition , "fab uat restart" wil restart this single host

    """
    env.update(dict(SERVERS['uat']))


def sit():
    """
    single host

    """
    env.update(dict(SERVERS['sit']))


#  SERVER GROUPS DEFINITION
def live():
    """
    multiple group of hosts - running: "fab live restart" will restart all live servers

    """
    env['hosts'] = [SERVERS[a]['host'] for a in LIVE_HOSTS]

    # env.update(dict(SERVERS['staging']))


def env_update(func):
    """
    Decorator - needs to be added to each task in fabricfile - for multiple host task execution
    """

    def func_wrapper(*args, **kwargs):
        if not len(env.hosts):
            return func(*args, **kwargs)
        else:
            env.update(dict(SERVERS[filter(lambda x: SERVERS[x]['host'] == env.host, SERVERS)[0]]))  # look up the full server config by host
            func(*args, **kwargs)

    return func_wrapper


@env_update
def bundle_media():
    """
    bundles media like css and js to one file.
    example:
        fab test bundle_media
    """
    # export DJANGO_SETTINGS_MODULE=settings
    #run("cd {0} && source settings/{1}-config.sh && python scripts/bundle_media.py".format(env.path,env.envname))
   run("source /usr/share/virtualenvwrapper/virtualenvwrapper.sh && workon {0} && python scripts/bundle_media.py".format("%s-myapp" % env.envname if env.envname<> 'live' else 'MyApp-test')) #change live venv to be live-MyApp

def _valid_branch(env):
    branch = run("cd {0} && git rev-parse --abbrev-ref HEAD".format(env.path))
    return branch == SERVER_BRANCHES[env.envname] and not env.envname=='local'


@env_update
def pull(*args, **kwargs):
    """
    Fetches origin and hard-resets the checkout to the server's branch.
    """
    if _valid_branch(env):
        branch = SERVER_BRANCHES[env.envname]
        with cd(env.path):
            run("git fetch origin")
            run("git reset --hard origin/%s" % branch)
    else:
        print "Error : Server is checked out to wrong branch!!!"


            #run('git fetch --quiet')
            #run('git fetch --tags --quiet')

@env_update
def reload():
    """
    Reload specified servers - kills unused gunicorn workers but waits workers with old code to finish processing.

    """
    bundle_media()

    #if env.envname in ('uat', 'staging', 'live'):
    f = StringIO()
    get("/opt/myapp/%s/pid" % env.envname,f)
    pid = re.search(r'\d+',f.getvalue()).group()
    run("ps aux | grep gunicorn | grep %s | grep master | grep -v grep | awk '{print $2}'" % env.envname)
    run("kill -HUP %s" % pid)


@env_update
def restart():
    """
    Hard restarts specified servers

    """
    bundle_media()
    run("ps aux | grep gunicorn | grep %s | grep master | grep -v grep | awk '{print $2}'" % env.envname)
    run("supervisorctl stop myapp-%s && supervisorctl start MyApp-%s" % (env.envname,env.envname))
    run("ps aux | grep gunicorn | grep %s | grep master | grep -v grep | awk '{print $2}'" % env.envname)


def help():
    fabric_functions = ['run', 'execute', 'local', 'func_wrapper']
    functions = set([obj.__name__ if obj.__name__ not in fabric_functions else '' for name, obj in
                     inspect.getmembers(sys.modules[__name__]) if inspect.isfunction(obj)])
    functions.remove('')
    print "usage: \n  fab [host/grcompany of hosts] [commands] (optional command with arguments command:kwarg=val,arg1,arg2,arg3)"
    print "\navailable servers:"
    list_hosts()
    print "\ncommands:\n  %s" % ', '.join([a for a in functions])
    print "\nexamples:\n  staging svnxapp:app=holdings_and_quotes,layout.py,permissions.py restart"
    print "  fab test restart"
    print "  fab staging svnxapp:app=holdings_and_quotes,lib/quote.py,layout.py,models.py"
    print "  fab staging svnxapp:app=holdings_and_quotes,lib/quote.py restart"
    print "  fab test build"
    print "  fab test bundle_media restart"
    print " For svnx whole app (comma in the end):"
    print "  fab test svnxapp:app=medrep,"
    print " For global lib:"
    print "  fab test svnxlib"
    print " For whole global media:"
    print "  fab test svnxmedia:"
    print " For global media file:"
    print "  fab test svnxmedia:javascript"
    print "  fab test svnxmedia:javascript/company/checklist.js"
    print "\nIf .js file in args like : fab staging svnxapp:app=holdings_and_quotes,media/js/quote.js,layout.py,models.py"
    print "It will bundle media itself"
    print "Restart test staging without params:\n  fab restart"
    for f in functions:
        print f
        print globals()[f].__doc__
        print "\n"



@env_update
def accessguni():
    run("tail /var/log/myapp/access-%s.log" % env.envname.upper() )

@env_update
def accessgunilive():
    run("tail -f /var/log/myapp/access-%s.log" % env.envname.upper() )

@env_update
def errorguni():
    run("tail /var/log/myapp/error-%s.log" % env.envname.upper() )

@env_update
def errorgunilive():
    run("tail -f /var/log/myapp/error.log" % env.envname.upper() )

def hostname():
    run('uname -a')

@env_update
def uptime():
    run('uptime')

Comodo Positive SSL - sec_error_unknown_issuer in Firefox

Tempted by a post on the official Google admins blog http://googlewebmastercentral.blogspot.co.uk/2014/08/https-as-ranking-signal.html (it's about ranking higher when serving HTTPS), I recently bought a cheap SSL certificate from the known issuer COMODO and upgraded my server with it. It caused several problems: when I redirected from 80 to 443, some 3rd party apps (like userena for django) created redirect loops, but I found a solution quickly; it was to put USERENA_USE_HTTPS=True in settings.py. It still sends more forgot-password mails than it should, but that's a different story.

Coming back to the point: if you are experiencing the error "sec_error_unknown_issuer" in Firefox after installing a COMODO Positive SSL certificate, this solution may help you. I don't know exactly, but I read that it has something to do with chain certificates: they should be bundled inside your website certificate. And remember to do it in the right order, because the first one should be your server's certificate (it's the one signed with your key), and nginx or whatever you use may have problems after reloading the service otherwise.

My command for bundling was:

cat exerceo_pl.crt COMODORSADomainValidationSecureServerCA.crt COMODORSAAddTrustCA.crt AddTrustExternalCARoot.crt > exerceo_pl.bundled.crt

After that I just had to point my nginx.conf to the new bundled certificate, and after all that a reload of nginx fixed my unknown issuer Firefox problem.
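For reference, the relevant nginx directives then look roughly like this (paths follow this example; the key file name is an assumption), followed by a config test and reload:

ssl_certificate     /etc/nginx/ssl/exerceo_pl.bundled.crt;
ssl_certificate_key /etc/nginx/ssl/exerceo_pl.key;

sudo nginx -t && sudo service nginx reload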

xrandr tips

Adding a resolution:

1) Generate a mode:

cvt 2560 1440 60
# 2560x1440 59.96 Hz (CVT 3.69M9) hsync: 89.52 kHz; pclk: 312.25 MHz
Modeline "2560x1440_60.00"  312.25  2560 2752 3024 3488  1440 1443 1448 1493 -hsync +vsync

2) Add it to xrandr:

sudo xrandr --newmode "2560x1440_60.00" 312.25 2560 2752 3024 3488 1440 1443 1448 1493 -hsync +vsync

3) Add the mode to an output (here a different mode/output as an example):

sudo xrandr --addmode VGA-0 "1680x1050_60.00"

xrandr -q
xrandr --verbose

xrandr --output HDMI-0 --mode 2560x1440

First generate a "modeline" by using cvt. The syntax is: cvt width height refreshrate

cvt 1680 1050 60

This gives you:

# 1680x1050 59.95 Hz (CVT 1.76MA) hsync: 65.29 kHz; pclk: 146.25 MHz
Modeline "1680x1050_60.00"  146.25  1680 1784 1960 2240  1050 1053 1059 1089 -hsync +vsync

Now tell this to xrandr:

sudo xrandr --newmode "1680x1050_60.00" 146.25 1680 1784 1960 2240 1050 1053 1059 1089 -hsync +vsync
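To actually switch to the new mode you still need to attach it to an output and select it (VGA1 is just an example output name; check xrandr -q for yours):

xrandr --addmode VGA1 "1680x1050_60.00"
xrandr --output VGA1 --mode "1680x1050_60.00"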

First clone the two screens (the smaller screen will display the top-left portion of the virtual screen):

xrandr --output VGA --auto --right-of LVDS

xrandr --output LVDS --mode 1280x800

xrandr --output LVDS --mode 1280x800 --rate 75

xrandr --output LVDS --auto

xrandr --output LVDS --off --output HDMI-0 --auto

xrandr --output VGA1 --mode 1024x768 --rate 60

#Laptop right extra Monitor Left

xrandr --output VGA1 --left-of LVDS1

#Laptop left extra Monitor right

xrandr --output LVDS1 --left-of VGA1

#This is to set your primary monitor.

#This sets your laptop monitor as your primary monitor.

xrandr --output LVDS1 --primary

#This sets your VGA monitor as your primary monitor.

xrandr --output VGA1 --primary

xrandr --output VGA1 --mode 1024x768 --rate 60

xrandr --pos <x>x<y>

$ xrandr --left-of <output>

$ xrandr --right-of <output>

$ xrandr --above <output>

$ xrandr --below <output>

The '--pos' option is more flexible and can place an output anywhere, for example:

$ xrandr --output VGA1 --pos 200x200
$ xrandr --output LVDS1 --pos 400x500

xrandr -o right